A few years ago, just before I discovered the Austrian School, I read The Evolution of Cooperation by Robert Axelrod. Austrians are somewhere between suspicious and dismissive of game theory (see this paper [pdf] for an exception and this article for a more typical example), but I find the central "point" of this book quite compelling and relevant to libertarianism. I’ll explain why after this humorous interruption from my mother:
As a young boy enters a barber shop, the barber whispers to his customer, “This is the dumbest kid in the world. Watch while I prove it to you.”
The barber puts a dollar bill in one hand and two quarters in the other, then calls the boy over and asks, “Which do you want, son?”
The boy takes the quarters and leaves.
“What did I tell you?” says the barber. “That kid never learns!”
Later, when the customer leaves, he sees the same young boy coming out of the ice cream store. “Hey, son!” he says. “May I ask you a question? Why did you take the quarters instead of the dollar bill?”
The boy licks his cone and replies, “Because the day I take the dollar, the game’s over!”
That’s one she sent me last week in email. I laughed out loud and then thought about it. It reminded me of Axelrod’s book, which is also about how the meaning of a single event is turned upside down when we can expect the event to be iterative — when, in other words, we expect it to repeat. How’s that for humorless nerd talk?
The boy seems stupid when we think he believes $1 < 50¢. He seems surprisingly cunning when we realize he knows $1 < 50¢+50¢+50¢…
(I won’t even touch the question of time preference, though you’ll notice the joke implicitly includes that concept, as well.)
So Axelrod’s book is about a similar shift involving the prisoner’s dilemma.
In its "classical" form, the prisoner’s dilemma (PD) is presented as follows:
Two suspects are arrested by the police. The police have insufficient evidence for a conviction, and, having separated both prisoners, visit each of them to offer the same deal. If one testifies ("defects") for the prosecution against the other and the other remains silent, the betrayer goes free and the silent accomplice receives the full 10-year sentence. If both remain silent, both prisoners are sentenced to only six months in jail for a minor charge. If each betrays the other, each receives a five-year sentence. Each prisoner must choose to betray the other or to remain silent. Each one is assured that the other would not know about the betrayal before the end of the investigation. How should the prisoners act? (Wikipedia)
Pure self-interest, guided by reason, will lead a prisoner to rat out his partner: whatever the other prisoner does, betraying him earns a shorter sentence than staying silent.
The standard interpretation of this classical prisoner’s dilemma is that rational self-interest guides individuals to reject cooperation, even when cooperation assures the greatest good for the greatest number. And the standard interpretation of that standard interpretation is that therefore we need a coercive authority to impose cooperation on us for our own good.
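To see why defection looks "rational," it helps to write the payoffs down. Here is a minimal sketch (the move names are mine; sentences are scored in years, so lower is better for the chooser):

```python
# One-shot prisoner's dilemma payoffs from the scenario above,
# scored as years in prison (lower is better for the chooser).
SENTENCE = {  # (my_move, partner_move) -> my sentence in years
    ("silent", "silent"): 0.5,   # both stay silent: six months each
    ("silent", "defect"): 10.0,  # I stay silent, partner testifies
    ("defect", "silent"): 0.0,   # I testify, partner stays silent
    ("defect", "defect"): 5.0,   # both testify: five years each
}

# Whatever the partner does, defecting yields a shorter sentence,
# so defection is the "dominant" one-shot strategy:
for partner_move in ("silent", "defect"):
    assert SENTENCE[("defect", partner_move)] < SENTENCE[("silent", partner_move)]
```

Note the trap: both prisoners, reasoning this way, land on mutual defection (five years each) even though mutual silence (six months each) is better for both.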
To borrow the Google Books summary of The Evolution of Cooperation,
This widely praised and much-discussed book explores how cooperation can emerge in a world of self-seeking egoists—whether superpowers, businesses, or individuals—when there is no central authority.
Axelrod changed the rules to create the "iterated prisoner’s dilemma" (IPD), wherein prisoner A and prisoner B face the classical prisoner’s dilemma over and over again, remembering what decisions were made and what results occurred in previous iterations. He invited others to submit strategies (programmed in BASIC) to compete in an IPD tournament.
The result: the winning strategy was one called “Tit-for-Tat,” in which the player always cooperates with strangers and always imitates the last move, cooperative or uncooperative, of any player whose game history is known.
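In code, Tit-for-Tat is almost trivially short. Here's a minimal sketch (the function names and the "C"/"D" move encoding are my own, not Axelrod's):

```python
def tit_for_tat(my_history, opponent_history):
    """Cooperate ("C") with a stranger; thereafter mirror the opponent's last move."""
    if not opponent_history:
        return "C"                   # first meeting: extend trust
    return opponent_history[-1]      # then echo whatever they did last

def always_defect(my_history, opponent_history):
    """A maximally uncooperative opponent, for contrast."""
    return "D"

def play(strategy_a, strategy_b, rounds):
    """Run an iterated prisoner's dilemma, returning both move histories."""
    history_a, history_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        history_a.append(move_a)
        history_b.append(move_b)
    return history_a, history_b

# Tit-for-Tat opens cooperatively, is betrayed once, then retaliates forever:
moves, _ = play(tit_for_tat, always_defect, 4)
print(moves)  # -> ['C', 'D', 'D', 'D']
```

Against a fellow cooperator it never defects at all, which is why it reads as "don't hit first; do hit back."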
That result is already interesting, and the Tit-for-Tat strategy seems to me to be something you could reasonably call “the libertarian strategy”: don’t hit first; do hit back.*
As the Wikipedia page puts it, “Axelrod reached the Utopian-sounding conclusion that selfish individuals for their own selfish good will tend to be nice and forgiving and non-envious.”
The even more interesting and “Utopian-sounding” result comes from iterating the already-iterated form of the PD, in which winning strategies “go forth and multiply”: the rules now dictate that losing players adopt the strategies of the players who beat them. The more successful a player’s strategy, the more like-minded players it encounters over time. Tit-for-Tat ends up taking over the world. Eventually everyone cooperates. This is a very different result, obviously, from the one-shot “lesson” of the classical prisoner’s dilemma.
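A toy version of that evolutionary step can be sketched as follows. The payoff values (3 for mutual cooperation, 1 for mutual defection, 5 and 0 for unilateral defection) are Axelrod's tournament scores; the population sizes and the simple replacement rule are my own simplifications. One subtlety worth seeing in the numbers: Tit-for-Tat never beats a defector head-to-head, but it piles up the highest *total* score across all of its matches, and it is total score that spreads.

```python
# Evolutionary (ecological) iteration of the IPD: every generation, each
# player plays everyone else, and the lowest total scorer adopts the
# strategy of the highest total scorer.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opp_history):
    return "C" if not opp_history else opp_history[-1]

def always_defect(opp_history):
    return "D"

def match(strat_a, strat_b, rounds=20):
    """Total scores for one iterated PD between two strategies."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        hist_a.append(move_a)
        hist_b.append(move_b)
        score_a += pay_a
        score_b += pay_b
    return score_a, score_b

# Start with defectors in the majority:
population = [always_defect] * 8 + [tit_for_tat] * 4
for generation in range(20):
    totals = [0] * len(population)
    for i in range(len(population)):
        for j in range(i + 1, len(population)):
            score_i, score_j = match(population[i], population[j])
            totals[i] += score_i
            totals[j] += score_j
    # the worst total performer imitates the best total performer
    worst = totals.index(min(totals))
    best = totals.index(max(totals))
    population[worst] = population[best]

print(sum(s is tit_for_tat for s in population))  # -> 12
```

By the end, every one of the twelve players has adopted Tit-for-Tat: cooperation "takes over the world" without any central authority imposing it.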
My favorite part of Axelrod’s book is the historical section, where he applies the Tit-for-Tat insights to examples of spontaneous cooperation among strangers and enemies across battle lines. Unfortunately, while most of his conclusions are libertarian-friendly, he also draws some very interventionist conclusions about the need to prevent the forms of spontaneous cooperation that might take place among market competitors in the absence of antitrust policing.
Despite what might seem like two strikes against it (from an Austrolibertarian perspective), I still recommend the book to anyone who is trying to think through the dynamics of cooperation and self-interest.
* Pacifist libertarians might object to my summary of libertarianism as “don’t hit first; do hit back,” and they’d be right: the libertarian strategy says don’t hit first; whether or not to hit back is, technically, outside the limits of libertarian theory.