The Ethics of the Prisoner’s Dilemma

The Prisoner’s Dilemma is a concept used to model a strategic interaction in which actors, each rationally pursuing their own self-interest, make everyone worse off than they could have been otherwise. This particular “game” is used both to understand failures of cooperation such as arms races and ethnic warfare and to prescribe solutions designed to elicit cooperation. The key feature of the game is that, when it is played only once, no matter what another player does (cooperating with me or trying to exploit me), I am better off trying to exploit the other player – so in the end, every player exploits rather than cooperates, and they are all worse off than they would have been had someone been able to “force” them to cooperate. What has been less often analyzed, to my knowledge, is the ethics of the Prisoner’s Dilemma game.
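The payoff structure behind this reasoning can be made concrete. The sketch below uses one conventional set of illustrative numbers (3/0/5/1 – purely hypothetical, not drawn from any particular application) and checks that defection dominates while mutual defection remains worse for both players:

```python
# A minimal sketch of the one-shot Prisoner's Dilemma (illustrative payoffs).
# Entries are (my_payoff, other_payoff); "C" = cooperate, "D" = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I am exploited
    ("D", "C"): (5, 0),  # I exploit
    ("D", "D"): (1, 1),  # mutual defection
}

# Whatever the other player does, defecting pays me strictly more:
for other in ("C", "D"):
    assert PAYOFFS[("D", other)][0] > PAYOFFS[("C", other)][0]

# Yet mutual defection leaves each player worse off than mutual cooperation:
assert PAYOFFS[("D", "D")][0] < PAYOFFS[("C", "C")][0]
print("defection dominates, but mutual defection is worse for everyone")
```

Any payoff matrix satisfying both checks is a Prisoner’s Dilemma; games that fail either check do not generate the problem described above.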

Whether one has a duty to cooperate with others in Prisoner’s Dilemma-like situations is an important question both for policy and for daily life. Take the question of one’s duties toward the environment. The environment is in many aspects a public good subject to Prisoner’s Dilemma problems. Clean air, clean water, and biodiversity are benefits that we all enjoy, and from which non-contributors cannot feasibly be excluded. Therefore, people have an incentive to take less care of the environment than they would if the environment could be privatized. Whether other people “do their part” or not, I’m better off not trying to contribute.

So let’s take some examples of things one could do for the benefit of the environment: eating less meat; polluting less by, e.g., driving less; propagating native species and destroying invasive species; reducing, reusing, and recycling; not littering; not spraying pesticides. Assume for the sake of argument that we would all benefit if everyone did these things. Do we then have a duty to do them? Would it be wrong not to do them?

I’ll derive my view from a very simple starting point: One has a duty not to exploit others, but one does not have a duty to allow oneself to be exploited. In the simple Prisoner’s Dilemma game, each player has only two options: cooperate (and be exploited) or defect (and exploit). In real life, however, there are different gradations of action, from, e.g., walking or riding a bicycle everywhere to driving a Hummer. Moreover, others’ cooperation isn’t actually zero in practice, and therefore cooperating doesn’t always entail being exploited. These considerations imply that some degree of cooperation in Prisoner’s Dilemma situations might actually be morally mandatory, but that devoting your life to providing public goods for others would not be.

Now, the latter part of the starting point could be made even stronger. Let’s say that not only does one not have a duty to allow oneself to be exploited, but one does not have a general duty to sacrifice one’s own interests for the benefit of others. Then, the benefits of the existing scheme of mutual cooperation, including your own, must be greater to you, individually, than the costs of your individual contribution, for that contribution to be morally mandatory. To see this, suppose it were otherwise. Suppose that your efforts on behalf of the environment, say, actually made you worse off than you would be if no one did what you did, including yourself. If that were the case, then you would be making yourself worse off for the benefit of others. That would count as a praiseworthy and supererogatory sacrifice, but not a moral requirement.

So here are my tentative conclusions. If your efforts, combined with the really existing efforts of everyone else, make you better off (taking opportunity costs into account) relative to a situation in which no one undertakes effort, then you have a moral duty to make those efforts. To do otherwise would be to free-ride on the efforts of others and thus to exploit them, which is wrong. If this condition is not satisfied, however, you do not have a duty to contribute – but it would still be praiseworthy to do so, unless the effort is clearly hopeless, in which case the impartial observer is more likely to have pity on your madness than praise for it. I actually think this is a rather strong conclusion and implies that we have a duty to undertake some (but not extraordinary) positive action on behalf of the environment, for instance. What remains interesting and unusual about the Prisoner’s Dilemma is that it models a set of cases for which the rightness of one person’s actions apparently depends on what others are doing.
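The condition in this conclusion reduces to a simple inequality: your payoff under the really existing scheme of cooperation, net of your own contribution cost, must beat your payoff in the no-one-contributes baseline. The function name and all the numbers below are hypothetical, chosen only to illustrate the test:

```python
# A sketch of the proposed condition (all quantities hypothetical and
# measured in the same units of individual well-being, opportunity
# costs included).
def duty_to_contribute(benefit_with_scheme, contribution_cost, baseline_no_effort):
    """True if contributing is morally mandatory under the proposed test:
    the existing scheme, net of your own cost, must leave you better off
    than a world in which no one (including you) makes any effort."""
    return benefit_with_scheme - contribution_cost > baseline_no_effort

# E.g. recycling: suppose the existing scheme is worth 10 to you, your own
# effort costs you 2, and a no-effort world would be worth 5 to you.
print(duty_to_contribute(10, 2, 5))   # True: you net 8 > 5, so free-riding exploits others
print(duty_to_contribute(10, 7, 5))   # False: you net 3 < 5, so contributing is supererogatory
```

Note that the test depends on what others are actually doing, since their efforts determine the value of the existing scheme – which is exactly the conditionality described above.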

14 thoughts on “The Ethics of the Prisoner’s Dilemma”

  1. Jason, one recent development in moral philosophy (yes! there really is development in moral philosophy!) has been the recognition that some important features of our moral lives resist adequate understanding if we come at them atomically, that is, by thinking about individual moral agents reacting to and interacting with other atomic individuals. The thought is that some of these features of our lives may be essentially relational.

    I think the story of obligations is like this. We can’t make sense of our obligations except when we begin by thinking that many of these are obligations to others who are reciprocally in obligation with us. When we think that way, we think about the nature or content of the obligations as essentially informed by the reciprocity inherent in the relationship that gives rise to the obligation.

    I think that’s the case with some of the considerations you’re looking at in PD. They are by nature bilateral exercises in rational (or moral) agency, not unilateral. And I think that informs the content of the obligations we have in them. Some are pretty plausibly conditional in just some of the ways you suggest. On the other hand, that might require thinking of some obligations (e.g. obligations to the environment) as taking this sort of relational form when they don’t naturally appear that way. So it might be that your analysis would work better for some kinds of obligations, while others might really want a different analysis (I myself am inclined to think of environmental obligations in a different way). Or we might want to think of ways of transforming other kinds of obligations into this sort of form (that is just what contracts do, for example, and some kinds of promising).

  2. I think the example of promises & contracts is a good analogy. If a person breaks her promise to you, do you still have to follow through on your end of the bargain? I would say not (in general). So your obligation is dependent on others’ following through on their obligations.

    I would note that this whole analysis assumes that our environmental obligations have only to do with the benefits that people obtain from the environment. There’s nothing in here about obligations to the environment as such. I’m open to the possibility that there are such obligations as well.

  3. One of the problems here is defining the meaning of “better off”. There’s a moral aspect to how you decide what makes you better off, and this feeds back circularly into your assessment of your duties in that regard.

    For instance, I could conclude that the problems of the environment are best ignored; that the longer we live the high life, the better off I’ll be, and most probably the crash that results from that will come after my death. So I’m definitely better off to trash and despoil.

    On the other hand, I have children. They’ll grow into the world that I/we leave them. So maybe I’m not better off to plow on making the crash bigger and bigger….

    So not only is the Prisoner’s Dilemma too crude because its choices are binary, unlike real life; it’s also too crude because its outcomes are binary, unlike the real world.

    1. I agree with that point, and I would try to employ the idea of “better off, all things considered,” which would include one’s natural affection for one’s children, neighbors, and others.

    2. I really like Oz’s point. The utilities that go into PD calculations are often not taken to matter in content. But as Oz says, it’s crazy to think we do not care about moral goods, and it is good for us that we do. So part of the value you are trying to maximize (in the PD model) might be the realization of the very values you are trying to derive on the back end. That is, if you care deeply about the environment already (and non-conditionally), the calculations are going to look very different. (And I think there’s a good argument to be made for having that end.)

      Of course, what that might actually mean is that these cases don’t after all fit the PD model, if the utilities don’t fall out right.

      1. “what that might actually mean is that these cases don’t after all fit the PD model, if the utilities don’t fall out right.”

        Indeed. More broadly, I suppose moral reasoning is always somewhat redundant when one’s personal preferences always line up with the right choice! Moral obligation only “bites” when it requires one to do something one otherwise would not.

      2. Jason, I’d put the point a little differently, since I think that way of putting it takes morality to be mostly a matter of constraint, and I don’t think that’s the best way of thinking of it. A different way to think about the same upshot for our reasons for good environmental stewardship would be to say not so much that we have obligations to someone or something to be good stewards (though perhaps there is space to say so), but that in the first instance we have reasons to be people who see good stewardship as something they want to engage in anyway. And then at that point it’s clear that the PD is not the right model for those reasons.

  4. Ah, that sounds like a more Aristotelian way of putting it! As a nonspecialist, the differences between virtue ethics and deontology have always seemed like a chicken & egg problem. Do we have obligations of stewardship because a person who exercises stewardship is the sort of person we should want to be, or should we want to be the sort of person who exercises stewardship because we have obligations of stewardship? I’m not sure it’s necessary to look at it one way or the other, but I’m willing to be convinced.

    Would a virtue ethicist maintain that for an ethical person, there are no Prisoner’s Dilemmas? (Because you never want to take advantage of another person, even if you could.)

    1. I guess I’d be surprised if there were none, but once you accept that welfare (wellbeing/utilities) can be “moralized” (can include the realization of ends that are often thought to be required by morality), as Oz insists, then it’s clear that a lot of apparent dilemmas no longer look so pressing.

      And you’re right that it is a more Aristotelian way of thinking about things. But I actually think that framework offers the best perspective for understanding what reasons are and why we have the reasons we do, morally or otherwise. The constraint-first model seems to me to insist on mystery rather than insight. Of course, such concerns are job security for moral philosophers!

  5. Why wouldn’t the calculated incentives include whatever moral quantities you are trying to introduce? That’s the nice thing about game theory: you can have as many convolutions as you like in establishing the value of the incentives to cooperate and defect.

    (Side note: iterated PDs have the exact opposite outcome, so instead of trying to fudge around the one-off PD it might be a lot more productive to devote thought to how to ensure iteration… and the other problems will take care of themselves. q.v. Douglas Hofstadter on ‘super-rationality.’)

  6. “Why wouldn’t the calculated incentives include whatever moral quantities you are trying to introduce?”

    I agree that this could be done in any game or rational-choice application whatever, and it would change the payoffs and possibly the equilibrium. But when applied to the moral question itself, it seems somewhat circular to me. So take the argument: “I believe cooperation with my partner is morally required, whatever my partner does; therefore, I feel good when I cooperate and would feel bad if I did not; therefore, I’m better off cooperating no matter what my partner does; therefore, cooperation with my partner is morally required.” But we might want to know whether there really are sufficient reasons to believe cooperation with your partner is morally required in the first place.

    Incidentally, this might be a reason to view moral claims as side constraints (per my conversation with Mark above). If the game really is a Prisoner’s Dilemma, all things considered, then the players of the game will be tempted to defect (by assumption). We might nevertheless want to develop a judgment about whether those players ought not defect. And if we could communicate that concern to the players, that might help them overcome their temptation.

    Or, as you say, you could just iterate the game! But iteration isn’t foolproof; it just creates multiple equilibria, of which mutual cooperation is one.
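    A tiny simulation makes the multiple-equilibria point vivid. The payoffs below are hypothetical, and the strategies are the usual textbook ones (tit-for-tat and always-defect):

```python
# Iterated Prisoner's Dilemma sketch (hypothetical per-round payoffs).
# Keys are (my_move, other_move); "C" = cooperate, "D" = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(strat_a, strat_b, rounds=10):
    """Return total payoffs for two strategies over repeated rounds.
    Each strategy sees only the opponent's history of moves."""
    hist_a, hist_b, tot_a, tot_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        tot_a += PAYOFF[(a, b)]
        tot_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return tot_a, tot_b

def tit_for_tat(opp_hist):
    """Cooperate first, then mirror the opponent's last move."""
    return "C" if not opp_hist else opp_hist[-1]

def always_defect(opp_hist):
    return "D"

print(play(tit_for_tat, tit_for_tat))      # (30, 30): cooperation sustained
print(play(always_defect, always_defect))  # (10, 10): mutual defection also stable
```

Neither pairing has an incentive to deviate unilaterally, which is the sense in which iteration yields multiple equilibria rather than guaranteeing cooperation.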

  7. To this exceedingly interesting question, I think part of the answer may be that it is reiterated PDs in the real world that occasion the need for moral rules–e.g., tit for tat. By definition non-reiterated PDs raise only questions of rationality–can it be rational for me to make a one-time sacrifice knowing you probably won’t?

  8. A real-life scenario

    In a routine price-cartel meeting just before the winter season, the managers of several licensed LPG companies sat together to decide on a mutually agreed price, with extra and unchecked profit during the upcoming winter. Most of the managers were keen to impose a steady price increase during winter by mutual consensus, but one manager, Mr. Ali, suggested that a constant increase in the fuel price would increase the suffering of common and poor people. All the other stakeholders heavily criticized Mr. Ali, as he was suggesting a cut in profit in the interest of society. Some of the senior managers in the meeting told Mr. Ali that he had no right to raise these ethical considerations in business, since he was representing his company as an employee and there is no law regulating the price of the product. Mr. Ali replied that he, too, was representing his company’s interest in increasing profit, but that he was also seriously concerned about the common person’s low purchasing power for this fuel, and that he would remain firm on this argument. This initiated a new dimension of brainstorming about social responsibility and ethical considerations in that business meeting. On one side there was a short-term strategy to earn high profit; on the other, a long-term strategy of business growth with ethical considerations.


    Considering the above-mentioned scenario, answer the following questions based on the knowledge you have acquired from this course so far.

    1: In your view, what may be the possible reason for a prisoner’s dilemma in this situation? How will ethical consideration help this business sector as a long-term growth strategy?

    2: How can Mr. Ali create a spirit of moral reasoning in this interest group? Discuss with logical points.
