Costly punishment can facilitate cooperation in public-goods games: human subjects will incur costs to punish non-cooperators even in settings where they are unlikely to face the same opponents again. Understanding when and why such punishment occurs is important both for the design of economic institutions and for modeling the evolution of cooperation. Our experiment shows that subjects engage in costly punishment even when it will not be observed until the end of the session, which supports the view that agents derive utility from punishing. Moreover, players continue to cooperate when punishment is unobserved, perhaps because they (correctly) anticipate that shirkers will be punished: fear of punishment can be as effective at promoting contributions as punishment itself.
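The mechanics behind this abstract are easy to make concrete. Below is a minimal sketch of a linear public-goods game with costly punishment; the specific parameter values (endowment, multiplier, punishment cost and fine) are illustrative assumptions, not the ones used in the experiment described above.

```python
def public_goods_payoffs(contributions, multiplier=1.6, endowment=20):
    """Each player keeps (endowment - contribution) and receives an
    equal share of the multiplied common pool."""
    pool = multiplier * sum(contributions)
    share = pool / len(contributions)
    return [endowment - c + share for c in contributions]

def apply_punishment(payoffs, punisher, target, cost=1, fine=3):
    """Costly punishment: the punisher pays `cost` to reduce the
    target's payoff by `fine`."""
    payoffs = list(payoffs)
    payoffs[punisher] -= cost
    payoffs[target] -= fine
    return payoffs

# Three full contributors and one free-rider:
p = public_goods_payoffs([20, 20, 20, 0])   # [24.0, 24.0, 24.0, 44.0]
# Free-riding pays before punishment; player 0 then punishes player 3:
p2 = apply_punishment(p, punisher=0, target=3)
```

Note that punishing is individually costly here (player 0's payoff falls from 24 to 23), which is exactly why its persistence in one-shot settings needs explaining.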
“The folk theorem of repeated games has established that cooperative behavior can be sustained as an equilibrium in repeated settings. Early papers on private monitoring and a recent paper of Cole and Kocherlakota (Games and Economic Behavior, 53, 59–72) challenge the robustness of this result by providing examples in which cooperation breaks down when players observe only imperfect private signals about other players’ actions, or when attention is restricted to strategies with finite memory. This paper shows that Cole and Kocherlakota’s result is an artefact of a further restriction that they impose. We prove that the folk theorem with imperfect public monitoring holds with strategies with finite memory. As a corollary, we establish that the folk theorem extends to environments in which monitoring is close to public, yet private.”
I really like this JEP piece on the economics of on-line crime by Tyler Moore, Richard Clayton, and Ross Anderson (http://www.aeaweb.org/articles.php?doi=10.1257/jep.23.3.3) for a few reasons:
1. The article is filled with softball questions and statements calling for more regulation that libertarian-minded economists with computer knowledge can and should knock out of the park. I recently read David Friedman’s “Future Imperfect,” and I didn’t realize how important a book it was until I read this piece. Internet security and crime are fertile ground for spontaneous-order stories, self-enforcement, and self-regulation. One part of the article notes how in the U.K., the police units specializing in Internet crime rely on funding from the banking industry (as it should be).
One example: they compare the market for security software to Akerlof’s market-for-lemons model, arguing that it is hard to discern the quality of security software. Though I agree it is hard for a consumer to understand the complexities of computer code and its weaknesses, there are several organizations that test and rate security software. A company that persistently creates vulnerable software will be weeded out through the testing process and news releases.
2. One anecdote explains how private security companies used to keep their virus lists private in order to beat out competitors in trials that measured which product could stop the most viruses. The security industry, realizing that all firms would be better off sharing, agreed at the EICAR conference to share their virus lists with competitors. This is an example of a self-enforcing coordination game (as opposed to a prisoner’s-dilemma collusion problem).
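The distinction drawn in that anecdote can be sketched with two toy payoff matrices. The numbers below are my own illustrative assumptions, not from the article: in a coordination game, mutual sharing is self-enforcing (sharing is a best response to sharing), whereas in a prisoner’s dilemma, withholding dominates no matter what the other firm does.

```python
def best_response(payoff, opponent_action):
    """Row player's best response given the opponent's action.
    payoff[my_action][opponent_action] -> my payoff."""
    return max((0, 1), key=lambda a: payoff[a][opponent_action])

# Actions: 0 = share virus list, 1 = withhold. Payoffs are illustrative.
coordination = [[3, 0],   # share: pays off only if the rival also shares
                [1, 1]]   # withhold: mediocre payoff either way
pd           = [[3, 0],   # cooperate (share)
                [4, 1]]   # defect (withhold) strictly dominates

# Coordination game: sharing is a best response to sharing, so once the
# EICAR-style agreement puts everyone at (share, share), no firm gains
# by deviating. In the PD, withholding is a best response to either action,
# so the same agreement would unravel without outside enforcement.
```

The point of the contrast is that the virus-list agreement needed no enforcement mechanism: once everyone shared, no firm had an incentive to deviate unilaterally.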
3. They discuss, though certainly not in enough depth, how laws that criminalize possession of any child sex material, regardless of circumstance, prevent private companies (ISPs etc.) from hunting down and taking down these sites, since finding them entails possessing the material, even if only as an on-screen image.
4. Finally, in its conclusion, the paper states: “This collective action problem is best dealt with by private-sector information sharing, as it was 15 years ago in the world of computer viruses.”