
Summary of the Prisoner’s Dilemma

Game Theory

For our purposes, a game is an interactive situation in which individuals, called players, choose strategies to deal with each other in attempting to maximize their individual utility. Games can be distinguished in several ways: 1) by the number of players involved; 2) by the number of repetitions of play; 3) by the order of the various players' preferences over the same outcomes. At one extreme are games of pure conflict, so-called zero-sum games, in which players have completely opposing interests over possible outcomes. At the other extreme are games of pure harmony, so-called games of coordination. In the middle are games involving elements of both conflict and harmony. It is one particular game that interests us most, since it describes the situation in Hobbes' state of nature, and is the central problem in contractarian moral theory.

The Prisoner’s Dilemma

The prisoner’s dilemma is one of the most widely debated situations in game theory. The story has implications for a variety of human interactive situations. A prisoner’s dilemma is an interactive situation in which it is better for all to cooperate rather than for no one to do so, yet it is best for each not to cooperate, regardless of what the others do.

In the classic story, two prisoners have committed a serious crime, but not all of the evidence necessary to convict them is admissible in court. Both prisoners are held separately and are unable to communicate. The prisoners are called in separately by the authorities and each is offered the same proposition. Confess, and if your partner does not, you will be convicted of a lesser crime and serve one year in jail, while the unrepentant prisoner will be convicted of a more serious crime and serve ten years. If you do not confess and your partner does, then it is you who will be convicted of the more serious crime, and your partner of the lesser crime. Should neither of you confess, the penalty will be two years for each of you; should both of you confess, the penalty will be five years each. In the following matrix, you are the row chooser and your partner the column chooser. The first number in each parenthesis represents the "payoff" for you in years in prison, the second number your partner's years. Let us assume each player prefers the fewest years in prison possible. In matrix form, the situation looks like this:

                              Prisoner 2
                       Confess         Don't Confess
Prisoner 1
   Confess             (5, 5)          (1, 10)
   Don't Confess       (10, 1)         (2, 2)

So you reason as follows: If your partner confesses, you had better confess because if you don’t you will get 10 years rather than 5. If your partner doesn’t confess, again you should confess because you will only get 1 year rather than 2 for not confessing. So no matter what your partner does, you ought to confess. The reasoning is the same for your partner. The problem is that when both confess the outcome is worse for both than if neither confessed. You both could have done better, and neither of you worse, if you had not confessed! You might have made an agreement not to confess but this would not solve the problem. The reason is this: although agreeing not to confess is rational, compliance is surely not rational!
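The dominance reasoning above can be checked mechanically. Here is a minimal Python sketch (the names and structure are my own, not from the text) that computes each player's best response from the payoff matrix above:

```python
# Payoffs from the matrix above, as (your years, partner's years),
# indexed by (your move, partner's move). Fewer years is better.
payoffs = {
    ("confess", "confess"): (5, 5),
    ("confess", "dont"): (1, 10),
    ("dont", "confess"): (10, 1),
    ("dont", "dont"): (2, 2),
}

def best_response(partner_move):
    """The move that minimizes your own years, given the partner's move."""
    return min(["confess", "dont"], key=lambda m: payoffs[(m, partner_move)][0])

# Confessing is best no matter what the partner does (a dominant strategy):
assert best_response("confess") == "confess"  # 5 years beats 10
assert best_response("dont") == "confess"     # 1 year beats 2
# Yet mutual confession (5, 5) is worse for both than mutual silence (2, 2).
```

The assertions pass as written: confession is a best response to either move, which is exactly the dilemma.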

The prisoner’s dilemma describes the situation that humans found themselves in in Hobbes’ state of nature. If the prisoners cooperate, they both do better; if they do not cooperate, they both do worse. But both have a good reason not to cooperate; they are not sure the other will! We can only escape this dilemma, Hobbes maintained, by installing a coercive power that makes us comply with our agreements (contracts). Others, like the contemporary philosopher David Gauthier, argue for the rationality of voluntary non-coerced cooperation and compliance with agreements given the costs to each of us of enforcement agencies. Gauthier advocates that we accept “morals by agreement.”

Climate Change and the Prisoner’s Dilemma

The Science

To understand climate change you need only basic physics and mathematics. The physics works like this: the earth's surface temperature is governed by the absorption and emission of thermal radiation, and greenhouse gases (GHG) like CO2 and CH4 (methane) trap thermal radiation, making the earth's surface warmer. The mathematics is even simpler. GHG + GHG = more GHG = more warming. It's that simple.

The connection between the concentration of GHG and warmer temperatures is well-established, with the analysis reaching back at least 400,000 years. If we look at the last few hundred years we find that CO2 concentrations in the atmosphere were 280 parts per million (ppm) in 1750 and have reached almost 400 ppm today. Models project that, unless forceful steps are taken to reduce fossil fuel use, they will reach 700–900 ppm by 2100. According to climate models, this will lead to a warming averaged over the globe in the range 2 to 11.5 degrees F.
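To give the concentration numbers above a physical sense of scale, here is a sketch using the standard logarithmic approximation for CO2 radiative forcing. This formula is a common textbook approximation I am adding for illustration; it is not derived in the text.

```python
import math

# A widely used approximation (an added assumption, not from the text):
# the extra radiative forcing from CO2, relative to a reference
# concentration c_ref, grows logarithmically:
#     dF ~= 5.35 * ln(c / c_ref)   in watts per square meter
def co2_forcing(c_ppm, c_ref=280.0):
    return 5.35 * math.log(c_ppm / c_ref)

print(round(co2_forcing(400), 2))  # 280 -> 400 ppm: about 1.91 W/m^2
print(round(co2_forcing(800), 2))  # 280 -> 800 ppm: about 5.62 W/m^2
```

The logarithm is the point: going from today's roughly 400 ppm to a projected 800 ppm would add considerably more forcing than the entire increase since 1750 has.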

And what is the cause of the increase in the concentration of GHG in the atmosphere? According to the IPCC, the leading international body for the assessment of climate change; the National Academy of Sciences, the leading scientific organization in the United States; and nearly two hundred other scientific organizations, it is now beyond any reasonable doubt that humans are the main cause of global climate change.2 3 4

The Problem

Climate change is already beginning to alter the land, air, and water upon which life depends. Higher temperatures, rising sea levels, droughts, floods, fires, changing landscapes, risks to wildlife, economic losses, and heat-related disease are just some of the consequences of changing the planet’s climate. In addition, there are consequences we can’t predict.

The Solutions

One of the first to understand the problem and propose an economic solution was the Yale economist William Nordhaus. Putting GHG into the atmosphere is free; it is an economic externality that leads to a "tragedy of the commons." The solution is to force persons, countries, and corporations to pay for the GHG they pump into the atmosphere, thereby reducing the incentive to do so. He has detailed how to do this in his book The Climate Casino: Risk, Uncertainty, and Economics for a Warming World.6 But how do we get multiple countries to cooperate in this endeavor?


Others, like the Australian public policy professor Clive Hamilton, are even more pessimistic. He worries that as we enter the "climate casino," humans won't do anything until the situation is critical. His book, Earthmasters: The Dawn of the Age of Climate Engineering, argues that humans aren't rational actors, and that this prevents them from doing what's necessary to avoid catastrophe.7 This will lead to risky technological fixes–to reckless gambling–like spraying sulfur particles into the stratosphere. Such radical solutions will be more attractive to some capitalists than taxing GHG, but there are catastrophic risks associated with high-tech fixes. Still, Hamilton thinks this is what will happen.


Why the Problem?

But why is the situation so intractable? Humans find themselves in this situation because it has the structure of a Prisoner's Dilemma (PD). The PD is a non-zero-sum game with roughly the following structure: it is one in which we all do better if we all cooperate, yet individually each has a strong incentive not to cooperate regardless of what others do. In the climate change debate the situation is simple. Consider two countries, A and B (for the moment we'll assume there are only two countries in the world), that must decide whether or not to dump their carbon.

                              Country B
                       Don't Dump Carbon    Dump Carbon
Country A
   Don't Dump Carbon   (S, S)               (W, B)
   Dump Carbon         (B, W)               (T, T)

(B = best outcome, S = second best, T = third best, W = worst)

The best outcome (B) for a country is one where the other country doesn't dump carbon and your country does, since the other country pays to develop, say, greener technologies or pays carbon taxes while you do not. The second best outcome (S) is where we all share the cost by using alternative energy sources and not dumping carbon. The third outcome (T) is where everyone is dumping carbon and the earth's atmosphere and environment are being destroyed. (This is the situation we are in.) The worst outcome (W) for a country is one in which it pays the cost of developing and using new technologies while other countries don't, and the climate changes for the worse anyway.
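The same dominance check used for the prisoners applies here. A minimal sketch, where the specific numbers assigned to B > S > T > W are my own illustrative choice (only their order matters):

```python
# Ordinal payoffs: Best > Second best > Third > Worst.
B, S, T, W = 3, 2, 1, 0

# Country A's payoff for each (A's move, B's move) pair, per the matrix above.
payoff_A = {
    ("dont_dump", "dont_dump"): S,
    ("dont_dump", "dump"): W,
    ("dump", "dont_dump"): B,
    ("dump", "dump"): T,
}

def best_response(b_move):
    """A's payoff-maximizing move, given what country B does."""
    return max(["dont_dump", "dump"], key=lambda a: payoff_A[(a, b_move)])

# Dumping dominates for A (and, by symmetry, for B):
assert best_response("dont_dump") == "dump"  # B beats S
assert best_response("dump") == "dump"       # T beats W
# So both countries dump and land on (T, T), though (S, S) is better for both.
```

Whatever numbers are chosen, as long as B > S > T > W the result is the same: dumping is dominant, and the world ends up at the third-best outcome.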

Of course everyone would do better and no one would do worse if we reached the second best outcome–the environment would be cleaner and catastrophic climate change might be avoided. So how do we get everyone to cooperate?

What are the Ultimate Solutions?

There are only a few realistic solutions to the PD. First, we need countries to agree to cooperate on the matter by signing a global warming treaty. Of course, even if you could get agreement, that still would not solve the problem, because you have to guarantee that others comply. One way to do this is by negative reinforcement: we would need someone (a world government or the UN) to have the power to punish violators with fines or carbon taxes. Alternatively, we could use positive reinforcement, offering huge incentives for developing climate-friendly technologies. More radically, we could use disablement strategies: we could outlaw oil companies and methane-producing factory farms, but this too would demand an international coercive power, hardly realistic at this point. Perhaps most radically of all, we could use technology to change human nature itself, say by using genetic engineering to make us more cooperative. Needless to say, this is probably as risky as anything Hamilton has in mind.
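The negative-reinforcement option can be made precise: a large enough fine on defection changes the payoffs so that cooperating becomes the dominant strategy. A hedged sketch with made-up numbers (the fine and the ordinal payoffs are illustrative assumptions, not figures from the text):

```python
# Ordinal payoffs as before: Best > Second best > Third > Worst.
B, S, T, W = 3, 2, 1, 0

def payoff_A(a_move, b_move, fine=0.0):
    """Country A's payoff, minus a fine levied on A if it dumps."""
    base = {("dont", "dont"): S, ("dont", "dump"): W,
            ("dump", "dont"): B, ("dump", "dump"): T}[(a_move, b_move)]
    return base - fine if a_move == "dump" else base

def best_response(b_move, fine=0.0):
    return max(["dont", "dump"], key=lambda a: payoff_A(a, b_move, fine))

# Without enforcement, dumping dominates:
assert best_response("dont") == "dump"
assert best_response("dump") == "dump"
# With a fine of 2 -- larger than both B - S and T - W -- the incentive
# flips and not dumping becomes the dominant strategy:
assert best_response("dont", fine=2) == "dont"
assert best_response("dump", fine=2) == "dont"
```

This is the Hobbesian point in miniature: the coercive power does not persuade anyone to be virtuous, it simply makes defection no longer pay.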

I think we’ll probably reach a point at which we will be forced to try some risky high-tech solution to survive, hoping that our science and technology save us.



Ethical Theory

We are discussing no small matter, but how we ought to live. ~ Socrates, in Plato's Republic


There are many theories that deny morality: nihilism, determinism, skepticism, relativism, egoism, and others. In my view ethicists dismiss these theories too easily—they have philosophical merit. Nihilism just "feels" wrong, but all of the others are at least partly true and appeal to me to varying degrees.

Most ethical theories try to justify morality.1 Typically this justification has been supplied by: self-interest—theories deriving from Plato and Hobbes; sympathy—theories deriving from Hume and Mill; nature—theories deriving from Aristotle and Aquinas; or reason—theories deriving from Kant and Locke. Let us briefly consider each in turn.

Some contemporary thinkers, Darwall and Gewirth come to mind, have tried to justify morality following Kant. However, few philosophers believe this project has been successful. At most, I would argue, these theories show that morality is weakly rational, i.e., that morality is not clearly irrational. But I don't see how they can show that another person's interests give me a reason to do anything.

Few contemporary thinkers have advanced natural law theories in the tradition of Aristotle and Aquinas. Contemporary thinkers try to bridge the is/ought gap with an evolutionary ethics or moral psychology utilizing knowledge of human nature unavailable to ancient and medieval philosophers.2  These projects show more promise.

Theories deriving from considerations of sympathy are also promising. Mill’s utilitarianism was based on a “social feeling,” Hume thought sympathy the basis of morality, Darwin had an entire theory of moral sentiments, and the contemporary philosopher Kai Nielsen places great emphasis on the role of sympathy in morality. It is hard to imagine a justification of the moral life without a role for sympathy.  

Theories deriving from self-interest are promising, and contemporary contract and game theorists, particularly Gauthier, have gone a long way toward sustaining and revitalizing the Hobbesian project. Nonetheless, their results are inconclusive and it is not clear that this approach can resolve the compliance problem. However, combining a contract approach with considerations of our evolutionary nature and ingrained or acquired human sympathies may have more promise.

Finally, there are ethical theories associated with religious and metaphysical views, but lack of agreement about these views precludes any hope of grounding morality in them. (Of course the same may be said about one or another of our moral theories—that they all suppose some metaphysic and that the dispute about ethics depends on resolving metaphysical issues first.)


Let's explore the issue of self-interest vs. morality—what we might call the hard question of morality—in more detail. (To put it another way, should we care about others less, the same, or more than we care about ourselves?) Hobbes answered the question of why we should be moral—it is in all of our interests. Still, the question of why I should be moral remains unanswered. This is the challenge originally set forth in Plato's Republic—why should I be moral if I have a ring that makes me invisible? Why be moral if self-interest (SI) demands an immoral course? In short, doesn't it pay to steal candy when no one is looking and you want candy?

Let’s begin with the prisoner’s dilemma (PD). It is easy to see that self-interest demands defection, a supposedly non-moral move, in a one-time PD. So here self-interest and ordinary morality conflict. The fact that both parties do better through mutual cooperation somewhat ameliorates this conclusion, but does not change the fact that it is better for one to not comply no matter what the other does.

The situation changes when the PD is iterated, since tit-for-tat (TFT) has been shown to be a robust strategy. But recent work by Ken Binmore challenges this assumption. (The "folk theorem" is also relevant here.) It is not that TFT is a bad strategy, but that real life is more complex than iterated PDs can model. There may be an infinite number of robust strategies, calling into question whether we can even determine what is in our self-interest. And if we don't know what's in our interest, how can self-interest ground morality?
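For a concrete sense of why tit-for-tat is robust in the iterated game, here is a small simulation. The per-round payoff numbers are the conventional ones from the game-theory literature, not values given in the text:

```python
# Conventional per-round payoffs, (me, opponent): mutual cooperation beats
# mutual defection, but unilateral defection pays best for the defector.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's last move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strat1, strat2, rounds=100):
    h1, h2 = [], []   # each player's move history
    s1 = s2 = 0       # running scores
    for _ in range(rounds):
        m1, m2 = strat1(h2), strat2(h1)  # each sees the other's history
        p1, p2 = PAYOFF[(m1, m2)]
        s1, s2 = s1 + p1, s2 + p2
        h1.append(m1)
        h2.append(m2)
    return s1, s2

print(play(tit_for_tat, tit_for_tat))    # sustained cooperation: (300, 300)
print(play(tit_for_tat, always_defect))  # TFT loses only round one: (99, 104)
```

Against itself, TFT cooperates every round; against a pure defector it is exploited exactly once and then defends itself. Binmore's point stands, though: with richer strategy spaces and noisy play, no single strategy is provably best.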

To begin to answer these questions, consider again our candy stealers. They may arrogantly assume they won't get caught or suffer the pangs of conscience, that the cameras aren't rolling, or that we won't perceive their true motivations and exclude them from cooperation. In short, they can't determine what is in their self-interest. But can they make an educated guess? Not really. It is too difficult to know the repercussions of their acts, and impossible to predict what adopting a disposition to behave this way will cost them in the long run.3 The complexity of the situation makes complete assessment impossible and reliable judgment unlikely, raising doubts about applying any moral theory to a complex world of interactions with other agents whose psychologies, motives, dispositions, and intents are difficult to determine, if not opaque.

Thus it is unlikely that self-interest can ground morality or immorality, since self-interest can't be determined with accuracy. So where to from here? In large part, I find myself agreeing with the contemporary philosopher Kai Nielsen.4

Nielsen wants to know if we have good reasons to assume the immoralist is mistaken. He accepts the view that morality entails sympathy and sensitivity to others, but some people are not moved by such considerations. So, why should those people be moral? Surely pursuing self-interest to the exclusion of morality is not irrational, despite the fact that philosophers as varied as Hobbes, Plato, and Aristotle tell us that the moral life and the happy life are synonymous. But can we really be so sure? Nielsen maintains that whether the bad guys are happy or not depends on what kinds of persons they are; and I agree. Neither rationality nor happiness requires morality: we must simply decide for ourselves how we should act and what sort of persons we will strive to be or become.

This means that considerations of reason, happiness, and self-interest, in the absence of sympathy and a commitment to the moral life, cannot adjudicate between morality and self-interest.5 While both Nielsen and I find this situation somewhat depressing, we accept that we cannot get to morality with intellect alone. From an objective point of view, reason is impotent to determine our values, and thus the moral life demands a non-rational, voluntary commitment. In other words, the moral life and the immoral one are Kantian antinomies, and the choice between them is interfused with existential angst. In the end we simply choose … and hope.


Can we say anything with confidence? We may be able to say that moral rules are contracts or agreements between self-interested people for mutual benefit, made on the assumption that others will reciprocate, and that these rules resulted from a protracted process of bargaining and power-struggling built on the original biological foundations of reciprocal altruism and kin selection. Of course we can revolt against biology; we can abandon our children. We choose our own destiny. But moral behavior, like all behavior, always has part of its explanation in its origins. And what can we say regarding what morality should be? Maybe that moral rules ought to promote human flourishing? This strikes me as intuitive, but then, as Wilson told us, we consult our emotions like hidden oracles. And why think our intuitions supply insight into the truth? Perhaps it is better, as Wittgenstein suggested, to remain silent about that which we don't know.


Still, the problem remains. Some individuals don't comply with their agreements, and viciously flaunt their disregard of the social contract. Does it help to know that we can't give good self-interested reasons to comply with the social contract? It doesn't seem so. Traditionally we relied on moral education as the way of ensuring that persons became cooperators. And if they didn't, we penalized them. Maybe punishment would resolve the problem? Maybe the expansion of cameras will eliminate the invisibility that encourages immoralism. Or maybe, as Aristotle imagined, we can structure society so as to inculcate in persons the kinds of habitual behaviors that benefit us all? Of course, if Aristotle had been aware of behavior modification, mind control, and genetic engineering, he might have advocated more drastic measures to ensure human flourishing.

In fact, if mutual cooperation becomes important enough, ethics may become a branch of applied engineering. We may have to engineer ourselves, removing tendencies adaptive for foragers, but suicidal for beings with technology. Of course we would lose the freedom to, say, release chemical or nuclear weapons, but this may be a small price to pay for security. And maybe engineering ourselves won’t entail a loss of freedom, but instead free us from some residual effects of our evolution, from overt aggressions and other tendencies that are now anachronistic in a technological world. But whatever we choose to do, one thing is certain, we alone are the stewards of the future of life and mind on this small outpost in an infinite cosmos. We alone must decide where we want to go.


Remember that none of the above implies that it is irrational to be moral, only that rationality alone can’t get us to morality. This isn’t to say there aren’t good reasons to be moral. There are. Immoralists might be punished and lose the benefits of cooperation; and moralists don’t have to be looking over their shoulder and may have more friends. All we have said is that we can’t show that the reasons to be moral outweigh the reasons to be immoral, if you benefit from and can get away with immorality.

And we have also suggested that it is becoming increasingly within our power to remake the world and ourselves in such a way that no one can benefit from or get away with immorality. While some will object that nightmarish scenarios will follow from our increasing control of immoral behavior, it is quite likely that we will all benefit from a world in which peaceful living can be secured by the application of our knowledge. Ironically, our inability to convincingly answer the why should I be moral question in theory will lead to our answering it in practice. In short, there never have been completely convincing reasons to be moral, as evidenced by the barbarism of human history, but, desperately in need of morality for our survival and flourishing, we will freely choose to transform ourselves by all means at our disposal.

In retrospect, biology and evolutionary stable strategies imposed early moral constraints, philosophical and religious education furthered the project, governments provided the muscle that conscience lacked, and now it is up to us to continue the project so that immorality doesn’t kill us. So we will be the ones who ultimately create the answer to the why be moral question.


  1. Morality defined as a system demanding that persons express care, concern, and interest in others; exemplified by moral rules such as: “don’t kill, lie, cheat, or steal;” “help others;” etc.
  2. Virtue ethics, with roots in Plato, Aristotle, the Stoics and Epicureans, and some early Christians, has enjoyed renewed success in the late 20th century through the work of Anscombe and MacIntyre. But I view virtue ethics as part of some other overall theory of ethics and not as a complete theory in itself.
  3. In addition our passions often cause us to misread situations. Maybe I think there is little chance of being caught because I am a compulsive candy stealer.
  4. Nielsen, Kai. “Why Should I Be Moral?—Revisited” American Philosophical Quarterly 21, January 1984.
  5. There is, however, one possible way out of this conundrum. SI justifications of morality are especially difficult because we work with isolated senses of self. If the self is separate from others, and our games are mostly zero-sum, it is hard to see why we should care about others. But if the other is an extension of ourselves, then helping others is SI by definition. In that case, zero-sum games are illusory. The problem is that this broad view of self is counter-intuitive.
  6. This essay was composed over a single weekend; it is not a substitute for sustained philosophical reflection and research.
  7. For an excellent introduction to ethical theory, see Louis P. Pojman's Ethical Theory: Classical and Contemporary Readings or James Rachels' The Elements of Moral Philosophy.