CHAPTER 4

Conflict

■ 4.1 INTRODUCTION

One of the central concepts of political science is conflict, that is, situations where the actions of one individual (or group) both influence and are influenced by those of another. Real-world examples of such conflict situations tend to be enormously complex, and a considerable amount of influential work in political science deals with the analysis of particular conflict situations and the ramifications of literally dozens of subtle influences upon the events that took place. The kind of analysis that we will undertake here, however, is at the other end of the spectrum. Instead of the kind of fine analysis that is very specific to a particular event, we will consider some extremely simple game-theoretic models that provide a very coarse analysis applicable to many different events of historical significance. The justification for this undertaking, however, lies in the extent to which these game-theoretic analyses, coarse though they may be, nevertheless shed light on why various events unfolded as they did, as well as explain some of the intractabilities of situations such as the arms race of the 1960s, 1970s, and 1980s.

We begin in Section 4.2 by introducing "2 x 2 games" (read as "two-by-two games"), so called because they involve two parties, each of which is choosing one of two available strategies. In Section 4.3 we introduce the notion of dominant strategies and Nash equilibria in this context. Sections 4.4, 4.5, and 4.6 then examine 2 x 2 games that model three real-world situations: the arms race, the Cuban missile crisis, and the Yom Kippur War. The first model works extremely well, the second moderately well, and the third model fails miserably. Yet, in Section 4.7, we show how to embellish the model of the Yom Kippur War via something called "the theory of moves," and this, indeed, gives a most satisfying analysis.

■ 4.2 TWO-BY-TWO GAMES

The games that our models will be based on are called "2 x 2 ordinal games." The framework for such a game is as follows:

1. There are two players: Row and Column.

2. Each player has a choice of two alternatives: C (for "cooperate") or N (for "noncooperate"). A choice of an alternative is called a strategy.

3. The play of the game consists of a single move: Row and Column simultaneously (and independently) choose one of the two alternatives, C or N. This yields four possible outcomes as displayed in Figure 1.

4. Each player ranks the four possible outcomes according to his or her relative preference. The outcome considered "best" (by, say, Row) is labeled "4" (by Row); second best, "3"; third, "2"; and, finally, the outcome considered worst (still, by Row) is labeled "1" (by Row).

                        Outcome a   Outcome b   Outcome c   Outcome d
    Row's choice            C           C           N           N
    Column's choice         C           N           C           N
    Shorthand notation      CC          CN          NC          NN

Figure 1

These games are called "ordinal" games since the labels 4, 3, 2, and 1 for the outcomes reflect only the order of preference as opposed to the (absolute) magnitude of one's preference for any particular outcome. Thus, for example, an outcome (say CN) labeled "4" by Row should not be construed as twice as good (in Row's view) as an outcome labeled "2" by Row.

For the sake of illustrating the notation we will use, let us look at a particular 2 x 2 ordinal game—one that will, in fact, turn out to be important.

                   Column
                   C    N
    Row     C      3    1
            N      4    2

Figure 2  Row's preference ranking

                   Column
                   C    N
    Row     C      3    4
            N      1    2

Figure 3  Column's preference ranking
Describing a 2 x 2 ordinal game means specifying a total of eight things: Row's preference ranking of the four possible outcomes CC, CN, NC, NN, and Column's ranking of the same four possible outcomes. For the particular example we will use here, the preference rankings are shown in Figures 2 and 3 above. Thus, Row ranks the four outcomes, from best to worst, as NC, CC, NN, CN, and Column ranks the outcomes, from best to worst, as CN, CC, NN, NC.

The rectangular arrays we have used to describe Row's and Column's preferences correspond to mathematical objects called "matrices" (plural of "matrix"), more explicitly, "2 x 2 matrices," since each array has two rows (i.e., two horizontal sequences of numerical entries) and two columns (i.e., two vertical sequences of numerical entries). This explains the choice of "Row" and "Column" as names for the players.

Notice also that in the 2 x 2 game described above, both Row and Column prefer the outcome CC to the outcome NN. That is, both assign mutual cooperation (CC) a "3" (second best) and mutual noncooperation (NN) a "2" (second worst). In particular, a gain for one player is not necessarily a loss for the other. For this reason these games are called "variable-sum" as opposed to "zero-sum."

The standard notation for presenting a particular 2 x 2 ordinal game involves using a single 2 x 2 matrix to simultaneously present the preference rankings of Row and Column. Each of the four entries in this case involves two numbers: Row's ranking and Column's ranking. Thus, for example, if we consider the upper right-hand entry (that is, the one that is simultaneously in the first row and the second column), we find that Row ranks it, in our example, as "1" and Column ranks it as "4." Hence, in the single matrix display of both Row's preferences and Column's preferences, we could use something like "1/4" or "(1, 4)" as the upper right-hand entry, as long as we agree that the first number so displayed applies to Row and the second number to Column. We'll opt for the "ordered pair" notation (1, 4). Thus, the single 2 x 2 matrix representing the game described above is shown in Figure 4 below.

                    Column
                    C         N
    Row     C     (3, 3)    (1, 4)
            N     (4, 1)    (2, 2)

Figure 4

There aren't all that many 2 x 2 ordinal games, and most of those are (and probably will remain) both uninteresting and unimportant. In the rest of this chapter, however, we will concentrate on what are the most well known and probably most interesting of the lot: Prisoner's Dilemma and Chicken.

■ 4.3 DOMINANT STRATEGIES AND NASH EQUILIBRIA

Recall that a strategy (for, say, Row) in a 2 x 2 ordinal game is a choice of C or N. Recall also that an outcome in a 2 x 2 ordinal game is an ordered pair, and that, for example, the outcome (3, 1) would be preferred by Row to the outcome (2, 4). For brevity, we might simply say that (3, 1) is better for Row than (2, 4). The central idea of this section is the following.

DEFINITION. The strategy N is said to be dominant for Row in a (particular) 2 x 2 ordinal game if—regardless of what Column does—it yields an outcome that is better for Row than would have been obtained by Row's use of the strategy C.

We could, of course, similarly define the notions of C being dominant for Row, N being dominant for Column, and C being dominant for Column. Illustrations of these concepts will occur in Sections 4.4 and 4.5, where we consider, respectively, Prisoner's Dilemma and Chicken.
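The definition just given is easy to check mechanically. Here is a minimal sketch (not part of the text's formal development) of one way to do so: the dictionary representation of a 2 x 2 ordinal game and the function name are our own choices, and the example game is the one from Figure 4.

```python
# A 2 x 2 ordinal game: each (Row's strategy, Column's strategy) pair maps to
# (Row's ranking, Column's ranking).  This is the game of Figure 4.
game = {
    ("C", "C"): (3, 3), ("C", "N"): (1, 4),
    ("N", "C"): (4, 1), ("N", "N"): (2, 2),
}

def is_dominant(game, player, strategy):
    """Return True if `strategy` is dominant for `player` ('Row' or 'Column'),
    i.e., it does strictly better than the other strategy no matter what the
    opponent does."""
    other = "N" if strategy == "C" else "C"
    index = 0 if player == "Row" else 1
    for opp in ("C", "N"):
        if player == "Row":
            chosen, alternative = game[(strategy, opp)], game[(other, opp)]
        else:
            chosen, alternative = game[(opp, strategy)], game[(opp, other)]
        if chosen[index] <= alternative[index]:
            return False
    return True

print(is_dominant(game, "Row", "N"))     # True in this game
print(is_dominant(game, "Column", "N"))  # True in this game
```

Running it previews what we will prove by hand in Section 4.4: in this particular game, N is dominant for both players.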
For the moment, however, we move on to the consideration of the second fundamental idea that will be involved in the analysis of 2 x 2 ordinal games.

DEFINITION. An outcome in a 2 x 2 ordinal game is said to be a Nash equilibrium if neither player would gain by unilaterally changing his or her strategy.

Our formalization of 2 x 2 ordinal games makes no provision for either player actually changing his or her mind. The game is played by a single simultaneous choice of strategy (C or N), and that's the end of it. There are, however, at least two good reasons for having the concept of a Nash equilibrium at hand. First, the real world is not static; it is extremely dynamic. Hence, when we set up our models so that an outcome of a 2 x 2 ordinal game corresponds to a real-world event, we'll want to ask about any predictions of events to unfold suggested by the model. Second, we will later formalize this dynamic aspect of the real world, developing models that allow precisely the kind of change in choice of strategy indicated above.

An outcome that is a Nash equilibrium is one that we will think of as being stable: No one wants to upset things—at least, not unilaterally. We should also note that, in game theory, a Nash equilibrium is a set of strategies, not an outcome. With 2 x 2 ordinal games, however, there is no harm in identifying the outcome with the strategies that lead to it, since they are unique. Examples of Nash equilibria will again occur in our presentations of Prisoner's Dilemma and Chicken. Nash equilibria, by the way, are named for John Nash, whose remarkable story was told in the book and movie entitled A Beautiful Mind.

■ 4.4 PRISONER'S DILEMMA AND THE ARMS RACE

Consider the following (hypothetical) situation. Two suspects are charged with having jointly committed a crime. They are then separated and each is told that both he and his alleged accomplice will be offered the choice between remaining silent (as permitted by the Miranda decision) or confessing. Each is also told that the following penalties will then be applied:

1. If both choose to remain silent, they will each get a one-year jail term based on a sure-fire conviction on the basis of a lesser charge.

2. If both confess, they will each get a five-year prison sentence.

3. If one confesses and one remains silent, then the confessor will be regarded as having turned state's evidence and he or she will go free. The other, convicted on the testimony of the first, will get a ten-year sentence.

The question of interest then becomes the following. Assume you are one of the suspects and your sole interest is in minimizing the length of time you will spend in jail. Do you remain silent or confess? Intuition may yield a response such as: "I wish I knew what my partner is doing." Surprisingly, this is wrong. What your partner is doing is irrelevant; you should confess. Let's see why this is true. There are two cases to consider. That is, your partner will either remain silent or confess. In the former case (remaining silent), your confession gets you off scot-free as opposed to the one-year jail term you'd get if you also remained silent. In the latter case (where he confesses), your confession gets you off with five years as opposed to the ten years you'd get for remaining silent in the face of his confession. Hence, confessing gets you a shorter jail sentence than remaining silent regardless of whether your partner confesses or remains silent.
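The two-case argument above can also be checked by brute force. A small sketch, using the jail terms (in years) from the penalties listed above; the tabular representation is our own:

```python
# Jail terms in years, keyed by (your choice, partner's choice),
# using the penalties listed in the story.
years = {
    ("silent", "silent"): 1, ("silent", "confess"): 10,
    ("confess", "silent"): 0, ("confess", "confess"): 5,
}

# Whatever the partner does, confessing yields strictly less jail time.
for partner in ("silent", "confess"):
    print("partner", partner,
          "-> confess:", years[("confess", partner)], "years",
          "vs. silent:", years[("silent", partner)], "years")
```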
The same reasoning applies to your partner. Thus, rational action (in terms of self-interest) leads to both you and your partner confessing and hence serving five years each. What is paradoxical here, however, is the observation that if both of you remained silent, you would serve only one year each and thus both be better off.

The above situation lends itself naturally to a description via a 2 x 2 ordinal game where "cooperation" (C) corresponds to "remaining silent" and "noncooperation" (N) to "confessing." (Think of "cooperating" as referring to cooperation with your partner as opposed to cooperation with the D.A.) Then Row, for example, ranks the outcomes from best (4) to worst (1) as:

4: NC - Row confesses and Column is silent: Row goes free.
3: CC - Row is silent and Column is silent: Row gets one year.
2: NN - Row confesses and Column confesses: Row gets five years.
1: CN - Row is silent and Column confesses: Row gets ten years.

Column's ranking is the same for CC and NN, but with "1" and "4" reversed for NC and CN. Hence the 2 x 2 ordinal game that models this situation is precisely the example from Section 4.2 (duplicated in Figure 5 below in our present hypothetical scenario). Thus, in the hypothetical situation involving the prisoners, both should choose to confess even though both would benefit if both remained silent.

                            Column
                            C (silent)    N (confess)
    Row     C (silent)       (3, 3)         (1, 4)
            N (confess)      (4, 1)         (2, 2)

Figure 5

The following proposition simply formalizes this in the context of the 2 x 2 ordinal game Prisoner's Dilemma.

PROPOSITION. The strategy N is a dominant strategy for both Row and Column in the game Prisoner's Dilemma.

PROOF. We prove that N is dominant for Row; the proof for Column is analogous. Thus, we must show that, regardless of what Column does, N is a better choice for Row than is C. Column can do two things; we consider these separately.

Case 1: Column chooses C. In this case, Row's choice of N yields an outcome for Row of "4" from (4, 1) as opposed to "3" from the outcome (3, 3) that would have resulted from Row's choice of the strategy C.

Case 2: Column chooses N. In this case, Row's choice of N yields an outcome for Row of "2" from (2, 2) as opposed to "1" from the outcome (1, 4) that would have resulted from Row's choice of the strategy C.

Thus, we've shown that, regardless of what Column does (i.e., whether we're in Case 1 or Case 2), N yields a better outcome for Row than does C. This completes the proof.

The paradoxical nature of Prisoner's Dilemma is now at least partially formalized: both Row and Column have dominant strategies leading to a (2, 2) outcome that is strictly worse—for both—than the (3, 3) outcome available via mutual cooperation. Such mutual cooperation could be induced by adding additional structure to the model—threats, repeated plays of the game (see Exercises 27-29), etc.—but in the absence of such things, how does one argue against the use of a dominant strategy?

The above proposition illustrates how to prove that a given strategy is dominant for a given player. What it does not illustrate, however, is how one finds the strategies (if they exist) that are dominant. With a little experience, one can do this just by staring at the preference matrix. A better method, however, is given in the exercises at the end of the chapter.
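One mechanical way to search for dominant strategies, in the spirit of the best-response chart developed in Exercise 5 at the end of the chapter, can be sketched as follows; the representation and function names are our own, not the text's.

```python
# Sketch of a best-response search for dominant strategies in a 2 x 2
# ordinal game given as a dict mapping (Row's strategy, Column's strategy)
# to (Row's ranking, Column's ranking).
def best_response(game, player, opponent_choice):
    """The strategy giving `player` the higher ranking, holding the
    opponent's choice fixed (no ties are possible in an ordinal game)."""
    if player == "Row":
        return max(("C", "N"), key=lambda s: game[(s, opponent_choice)][0])
    return max(("C", "N"), key=lambda s: game[(opponent_choice, s)][1])

def dominant_strategy(game, player):
    """Return the dominant strategy for `player`, or None if there is none."""
    responses = {best_response(game, player, opp) for opp in ("C", "N")}
    return responses.pop() if len(responses) == 1 else None

prisoners_dilemma = {
    ("C", "C"): (3, 3), ("C", "N"): (1, 4),
    ("N", "C"): (4, 1), ("N", "N"): (2, 2),
}
print(dominant_strategy(prisoners_dilemma, "Row"))     # N
print(dominant_strategy(prisoners_dilemma, "Column"))  # N
```

A strategy is dominant exactly when it is the best response to both of the opponent's choices, which is what the set comparison above checks.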
Another paradoxical aspect of Prisoner's Dilemma is the fact that not only does the (2, 2) outcome arise from the use of dominant strategies, but, once arrived at, it is incredibly stable. This stability is formalized in the following.

PROPOSITION. The outcome (2, 2) is a Nash equilibrium in the game Prisoner's Dilemma.

PROOF. If Row unilaterally changes from N to C, the outcome would change from (2, 2) to (1, 4) and, in particular, be worse for Row (having gone from "2" to "1" in the first component). Similarly, if Column unilaterally changes from N to C, the outcome would change from (2, 2) to (4, 1) and be worse for Column in exactly the same way as it was for Row (having now gone from "2" to "1" in the second component). This completes the proof.

For our purposes, the importance of Prisoner's Dilemma is as a simple model of some significant political events. We consider one such example now and several more potential examples in the exercises at the end of the chapter. Our treatment is largely drawn from Brams (1985a); the reader is referred there for more background and analysis.

The U.S.-Soviet arms race of the 1960s, 1970s, and 1980s is a natural candidate for game-theoretic modeling since the actions of both countries certainly influenced and were influenced by those of the other. There is also an intractability here that, at the time, seemed to defy rationality in light of the economic burdens being imposed on both countries. Our goal here is to model the arms race as a simple 2 x 2 ordinal game (which turns out to be Prisoner's Dilemma) and thus explain some of the intractability as being a consequence of the structure of preferences as opposed to irrationality on the part of either country.

The model will be an enormous oversimplification of the real-world situation. It will ignore a number of admittedly important factors (such as the political influence of the military-industrial complex in each country and the economic role played by military expenditures in avoiding recessions) and focus instead on the following underlying precepts:

1. Each country has an option to continue its own military buildup (to arm), or to discontinue the buildup and begin a reduction (to disarm).

2. Both countries realize that the (primarily economic) hardships caused by an arms race make a mutual decision to arm less desirable than a mutual decision to disarm.

3. Each country would prefer military superiority to military parity. (Notice here that, although we are talking about the 1960s, 1970s, and 1980s, this may well have been false by the late 1980s.)

Given these (and the obvious least preference for military domination by the other country), we see that each country would rank the four possibilities, from most preferred to least preferred, as follows:

4. Military superiority (via the other's unilateral disarmament).
3. Mutual disarmament (parity without economic hardships).
2. An arms race (parity, but with economic hardships).
1. Military inferiority (via its own unilateral disarmament).

Thus, if we let the Soviets play the role of "Column" and the U.S. the role of "Row," with "cooperate" (C) corresponding to "disarm" and "noncooperate" (N) corresponding to "arm," the 2 x 2 ordinal game modeling this situation turns out to be (a relabeled version of) Prisoner's Dilemma (Figure 6 below).
Again we see the paradox: Both countries prefer mutual disarmament—the (3, 3) outcome—to an arms race—the (2, 2) outcome. However, both countries have a dominant strategy to arm, and thus individual rationality produces the arms race no one wants.

                       Soviet Union
                       Disarm      Arm
    U.S.    Disarm     (3, 3)     (1, 4)
            Arm        (4, 1)     (2, 2)

Figure 6  The arms race as Prisoner's Dilemma

■ 4.5 CHICKEN AND THE CUBAN MISSILE CRISIS

The 2 x 2 ordinal game known as "Chicken" is named after the less than inspiring real-world (one would like to think hypothetical) "sport" in which opposing drivers maintain a head-on collision course until at least one of them swerves out of the way. The one who swerves first loses. Ties can occur.

In modeling Chicken as a 2 x 2 ordinal game, we identify the strategy "swerve" with cooperation, and "don't swerve" with noncooperation. The difference between Chicken and Prisoner's Dilemma is the interchange of preference "2" and preference "1" for both players. That is, in Prisoner's Dilemma, your least preferred outcome is a combination of cooperation on your part met by noncooperation on the part of your opponent. In Chicken, however, this outcome—although not all that great—is strictly better than mutual noncooperation. The matrix notation for Chicken is shown in Figure 7 below. Notice that the game, like Prisoner's Dilemma, is symmetric (i.e., seen the same way from the point of view of Column or Row).

                              Column
                              C (swerve)    N (don't swerve)
    Row     C (swerve)          (3, 3)           (2, 4)
            N (don't swerve)    (4, 2)           (1, 1)

Figure 7

In terms of dominant strategies and Nash equilibria, we have the following:

PROPOSITION. In the game of Chicken, neither Row nor Column has a dominant strategy, but both (2, 4) and (4, 2) are Nash equilibria (and there are no others).

PROOF. We shall begin by showing that C is not a dominant strategy for Row. To do this, we must produce a scenario in which N yields a better result for Row than does C. Consider the scenario where Column chooses C. Then, a choice of N by Row yields (4, 2) and hence "4" for Row, while a choice of C by Row yields (3, 3) and hence only "3" for Row. Thus, N is a strictly better strategy for Row than C in this case (i.e., in this scenario), and so C is not a dominant strategy for Row. Similarly, one can prove that N is not a dominant strategy for Row, and that neither C nor N is a dominant strategy for Column.

To show that (2, 4) is a Nash equilibrium, we must show that neither player can gain by unilaterally changing his or her strategy. We'll show it for Row; the proof for Column is completely analogous. If Row unilaterally changes from C to N, then the outcome would change from (2, 4) to (1, 1) and, in particular, be worse for Row (having gone from "2" to "1" in the first component). This shows that (2, 4) is a Nash equilibrium. The proof that (4, 2) is a Nash equilibrium is left to the reader, as is the proof that there are no others.

Comparing the above proposition with the one in Section 4.4, we see the fundamental difference between Prisoner's Dilemma and Chicken:

1. In Prisoner's Dilemma, both players have a dominant strategy, and so there is an expected (although paradoxically unfortunate) outcome of (2, 2). Moreover, because this outcome is the result of dominant strategies, it is also a Nash equilibrium (see Exercise 11), and thus (intuitively) stable.

2. In Chicken, there is no expected outcome (i.e., no dominant strategies), although (3, 3) certainly suggests itself.
This outcome, however, is unstable (not a Nash equilibrium), and only a fear of the (1, 1) outcome would prevent Row and Column from trying for the (4, 2) and (2, 4) outcomes. Thus, instability and flirtations with noncooperation tend to characterize those real-world situations most amenable to game-theoretic models based on Chicken.

In October 1962, the United States and the Soviet Union came closer to a nuclear confrontation than perhaps at any other time in history. President John F. Kennedy, in retrospect, estimated the probability of nuclear war at this time to be between one-third and one-half. The event that precipitated this crisis was the Soviet installation of medium and intermediate range nuclear missiles in Cuba, and the subsequent detection of this by U.S. intelligence. History now refers to this event as the Cuban missile crisis.

The events that actually unfolded ran as follows. By mid-October 1962, the Central Intelligence Agency had determined that Soviet missiles had been installed in Cuba and were within ten days of being operational. Kennedy convened a high-level executive committee that spent six days in secret meetings to discuss Soviet motives, decide on appropriate U.S. responses, conjecture as to Soviet reaction to U.S. responses, and so on. The final decision of this group was to immediately put in place a naval blockade to prevent further shipments of missiles, while not ruling out the possibility of an invasion of Cuba to get rid of the missiles already there. Khrushchev, on behalf of the Soviets, responded by demanding that the United States remove its nuclear missiles from Turkey (a demand later granted—although not publicly—by Kennedy), and promise not to invade Cuba (a demand granted by Kennedy). The Soviets then withdrew all their missiles from Cuba.

Much has been written about the Cuban missile crisis and game-theoretic models thereof. Our purpose here is to present two of the simplest such models based on the game of Chicken. The first is from Brams (1985a, 1985b). The difference in the two models lies in the specification of alternatives available to the players. It may be that the former model represents more of a U.S. point of view of the situation and the latter more of a Soviet point of view. Figure 8 below presents the former.

                              Soviet Union
                              Withdraw missiles    Maintain missiles
    U.S.    Blockade               (3, 3)               (2, 4)
            Airstrike              (4, 2)               (1, 1)

Figure 8  The Cuban missile crisis as Chicken

It should be pointed out that Brams (1985a, 1985b) embellishes the model in several ways (e.g., by consideration of deception, threats, the sequential nature of the events, etc.), as well as considering a different ranking of the alternatives by the players. The actual Soviet motives for the installation of the missiles in the first place are apparently still not known, although the fear of a U.S. invasion of Cuba may well have played a role. For more on this, see Brams (1993). If we accept this as a primary issue in the minds of the Soviets, then the game (especially as perceived by the Soviets) may have been as in Figure 9 below. Notice that the underlying 2 x 2 ordinal game is again Chicken. Thus, in both models, the structure of the underlying game sheds light on the tensions of these dramatic times in the early 1960s.

                                      Soviet Union
                                      Withdraw missiles    Maintain missiles
    U.S.    Give up option
            to invade Cuba                 (3, 3)               (2, 4)
            Invade Cuba                    (4, 2)               (1, 1)

Figure 9
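The proposition earlier in this section, that Chicken has exactly two Nash equilibria, (2, 4) and (4, 2), can also be verified mechanically. A minimal sketch, with the game again represented as a Python dictionary of our own devising:

```python
# Sketch: enumerate the Nash equilibria of a 2 x 2 ordinal game given as a
# dict mapping (Row's strategy, Column's strategy) to (Row's rank, Column's rank).
def nash_equilibria(game):
    equilibria = []
    for r in ("C", "N"):
        for c in ("C", "N"):
            other_r = "N" if r == "C" else "C"
            other_c = "N" if c == "C" else "C"
            # Neither player can gain by unilaterally switching strategies.
            row_ok = game[(r, c)][0] >= game[(other_r, c)][0]
            col_ok = game[(r, c)][1] >= game[(r, other_c)][1]
            if row_ok and col_ok:
                equilibria.append(game[(r, c)])
    return equilibria

chicken = {
    ("C", "C"): (3, 3), ("C", "N"): (2, 4),
    ("N", "C"): (4, 2), ("N", "N"): (1, 1),
}
print(nash_equilibria(chicken))  # [(2, 4), (4, 2)] -- and (3, 3) is not one
```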
■ 4.6 THE YOM KIPPUR WAR

In October 1973, the Yom Kippur War pitted Israel against a combination of Egyptian and Syrian forces. Israel quickly gained the upper hand, at which point the Soviet Union made it known that it was seriously considering intervening on behalf of Egypt and Syria. The Soviets also made it known that they hoped the United States would cooperate in what they referred to as a peace initiative. On the other hand, they were certainly aware of the U.S. option to frustrate this Soviet initiative by coming to the aid of Israel.

The above situation, again in very simplistic terms, suggests a 2 x 2 ordinal game model (Figure 10 below), where the rankings of preferences have not been filled in yet. The United States plays the role of Row: C is to cooperate with the Soviet initiative (nonintervention) and N is to frustrate the Soviet initiative (intervention). The Soviets play the role of Column: C is to seek a diplomatic solution and N is to supply Egypt and Syria with military aid.

Figure 10  The Yom Kippur War as a 2 x 2 ordinal game, with the preference rankings not yet filled in.

The question now becomes: How did the Soviet Union and the United States rank the different outcomes, and was each aware of the other's preferences? History suggests that the Soviets were convinced the preferences were as shown in Figure 11 below. Notice that this is not Prisoner's Dilemma since the United States is ranking CN ahead of NN. (That is, if the Soviets choose N, the United States would rather choose C than N.)

                                                     Soviets
                                                     Seek diplomatic    Supply Egypt and Syria
                                                     solution (C)       with military aid (N)
    U.S.    Cooperate with the Soviet
            initiative (nonintervention) (C)             (3, 3)                (2, 4)
            Frustrate the Soviet
            initiative (intervention) (N)                (4, 1)                (1, 2)

Figure 11

Why would the Soviets think the United States would not respond to Soviet intervention by intervention of its own? The answer here seems to lie with the U.S. political situation at home at this time. The Watergate scandal was creating what was perceived as a "crisis of confidence" in the U.S. political arena. Hence, the Soviets thought that a decision to give military aid to Egypt and Syria would not be met with an appropriate response from the United States.

President Nixon, however, realized exactly how the Soviets perceived the situation, and the consequences of this perception (see Exercise 1 at the end of the chapter). Hence, his immediate goal became that of convincing the Soviets that the correct model was, in fact, Prisoner's Dilemma, as shown in Figure 12 below.

                                                     Soviets
                                                     Seek diplomatic    Supply Egypt and Syria
                                                     solution (C)       with military aid (N)
    U.S.    Cooperate with the Soviet
            initiative (nonintervention) (C)             (3, 3)                (1, 4)
            Frustrate the Soviet
            initiative (intervention) (N)                (4, 1)                (2, 2)

Figure 12

Nixon's method of accomplishing this was to place the U.S. forces on worldwide alert—one of only about half a dozen times that nuclear threats have been employed by the United States. This move (since characterized by then-Secretary of State Henry Kissinger as a "deliberate overreaction") seems to have been effective in convincing the Soviets that Prisoner's Dilemma was, in fact, the correct model for U.S. and Soviet preferences in this situation.

The astute reader may well be asking the following question: Did Nixon actually gain anything by convincing the Soviets that the game was Prisoner's Dilemma? This is the issue we take up in the next section.

■ 4.7 THE THEORY OF MOVES

Recall that our basic 2 x 2 ordinal games are played by a single simultaneous choice of strategy (C or N) by both players. An outcome is then decided, and that's the end of it.
In particular, as game-theoretic models of real-world situations, these 2 x 2 games are about as simple as one could hope for. The price paid for this simplicity, however, is a loss of the dynamics found in the real world.

As a particular example of the kind of loss referred to above, let's return to the considerations in Section 4.6 and the Yom Kippur War. Recall that Nixon placed U.S. forces on worldwide alert in order to convince the Soviets that the game being played was really Prisoner's Dilemma. But now, let's face up to a fundamental difficulty with this game-theoretic model of that particular conflict:

It simply doesn't work.

In what sense does the above model fail to work? The answer: It fails to explain what actually happened. That is, the existence of dominant strategies—for intervention in this case—should have resulted in mutual noncooperation between the United States and the Soviet Union. But, in fact, neither chose to intervene and so we wound up at the (3, 3) outcome (which is also unstable in the sense of not being a Nash equilibrium). What is wrong with our model, and can it be modified to more faithfully reflect reality?

The most obvious place to look for a shortcoming in the Prisoner's Dilemma model of intervention in the Yom Kippur War is with the preference rankings we assigned. But this is probably not where the problem lies in this particular case. The problem is even more basic than the choice of preferences. There is a very fundamental way in which a 2 x 2 ordinal game differs from the situation in which the United States and the Soviet Union found themselves in 1973. This difference rests in what we might call the "starting position." In a 2 x 2 ordinal game, the starting position is completely neutral—neither C nor N has any predetermined favored status. But in the real-world situation of the Yom Kippur War, the starting position was clearly one of mutual nonintervention. Hence, the United States and the Soviet Union were already at the (3, 3) outcome, and the question was whether or not either side should change its status quo strategy of C (nonintervention) to N (intervention).

From this point of view, the game certainly does start to explain the events that unfolded. That is, the (3, 3) outcome is definitely not stable (i.e., not a Nash equilibrium) and so it is certainly rational for each side to try to find out the resolve of the other with respect to responding to a switch from C to N. This, of course, is exactly what the Soviets did, and Nixon's response was designed to convey a very exact message regarding this resolve.

Thus, a better way to use the 2 x 2 ordinal preference matrix in modeling this particular situation is to consider a new kind of game where a starting position is determined in some way, and then each player has the option of changing strategy. This leads us directly to (a slightly modified version of) the so-called theory of moves introduced by Brams and Wittman (1981) and extensively pursued in Brams (1994). The precise definitions and rules are as follows.

To each 2 x 2 ordinal preference matrix (like that for Chicken or Prisoner's Dilemma) we associate two "sequential games"—one in which Row goes first and one in which Column goes first. We'll describe the former version; the latter is completely analogous. Suppose we have a fixed 2 x 2 ordinal preference matrix.
The "sequential game" (with Row going first) proceeds as follows. Step 1: Both players make an initial simultaneous choice of either C or N. This determines what wd will call an initial position of the game. Step 2: Row has the choice of leaving things as they are ("staying"), or changing his strategy. Step 3: Column has the same choices as did Row in step 2. 130 4. CONFLICT 4,7. The Theory of Moves 131 They continue alternately. The game ends if any one of the following situations occurs: 1. It is Column's turn to move and the position of the game is (-, 4); 2. It is Row's (second or later) turn and the position of the game is (4, -). Thus, if the initial position is (4, -), the game does not immediately end. 3. Either Row or Column chooses to stay, with the one exception to this being that an initial "stay" by Row does not end the game; we give Column a chance to move even if Row declines the chance to switch strategy on his first move. Notice that the effect of rules 1 and 2 is to build a bit of rationality into the rules of the game. This guarantees that certain games will terminate and thus be susceptible to the kind of tree analysis we want to do. The outcome at which a game ends is called the final outcome and this (alone) determines the payoffs. The analogue of a Nash equilibrium in the present context is given by the following. DEFINITION. An outcome is called a non-myopic equilibrium when Row goes first if sequential rational play in the game described above results in that outcome being the final outcome any time it is chosen as the initial position. The notion of a "non-myopic equilibrium when Column goes first" is defined similarly. DEFINITION. A non-myopic equilibrium is an outcome that is both a non-myopic equilibrium when Row goes first and a non-myopic equilibrium when Column goes first. We will analyze rational play in the kind of sequential game described above by what is called "backward induction" or, more informally, "pruning theTfee7"This so-call^'game^tree^analysfs'' beginsTit any point in the tree where the next move will definitely end the game (according to the rules). Assuming the player about to make this final move is rational—meaning that he will choose, of the two possible final outcomes resulting from his move, the one that is better for him—we can eliminate from consideration (and from the tree) the potential move that will be rejected by this player. The result of our eliminating this move is a smaller tree that nevertheless represents the same game (assuming, as we are, that players are rational). Continuing this "pruning" eventually reveals the optimal sequence of moves that would be chosen by rational players. We will illustrate backward induction in the sequential version of Prisoners Dilemma where Row goes first. It will turn out, in fact, that both the (2, 2) and (3, 3) outcomes are non-myopic equilibria. The (3, 3) outcome, however, is the result of a dominant strategy in the theory of moves version of Prisoner's Dilemma. Our method of analyzing the theory of moves version of Prisoners Dilemma will be to consider separately the four possible initial positions in the game. For each, we'll do a game-tree analysis and find the corresponding final outcome. This will immediately show that (2, 2) and (3, 3) are the only non-myopic equilibria. Further analysis at this point will then yield the additional claim about the dominant strategy. Recall that Row is going first. 
Case 1: The Initial Position is (3, 3) in Prisoner's Dilemma

The tree of possibilities, displayed in Figure 13 below, is constructed in the following way from the 2 x 2 ordinal game (which is also reproduced in the small box within Figure 13).

1. The top node is the initial position, which is (3, 3) in this case.

2. Row gets to move first and has a choice between staying at (3, 3) or switching strategies from "C" to "N" and thus moving the position of the game to (4, 1). This explains the two nodes labeled "stay at (3, 3)" and "(4, 1)" on the level of the tree just below the top (3, 3) node. Notice that to the far left of these two nodes is the word Row, indicating that the choice between these two is being made by Row.

3. Column gets to move next. Recall that even if Row chooses to stay on the initial move, the game does not end. Thus, if Row chooses to stay at (3, 3), Column could also stay, ending the game at a final outcome of (3, 3), or Column could switch his strategy from "C" to "N" and thus move the position of the game from (3, 3) to (1, 4). Similarly, if Row had moved the game to (4, 1) on his first move, Column would have a choice between staying there, and ending the game at a final outcome of (4, 1), or switching strategies from "C" to "N" and thus moving the position of the game from (4, 1) to (2, 2).

4. Column and Row thus continue to alternate moves. Notice that Row is controlling the "vertical movement among outcomes" and Column is controlling the "horizontal movement among outcomes."

5. Notice also that the game is finite, since the position of the game becomes (1, 4) at a time when it is Column's turn to move (thus guaranteeing a stay at his "4" by Column according to the rules) and the position of the game becomes (4, 1) at a time when it is Row's turn to move. (Not all games like this are finite—see Exercise 35.)

Figure 13  The game tree for the initial position (3, 3) with Row going first; the 2 x 2 game itself is reproduced in a small box within the figure, and the terminal nodes are "Row stays with his 4" at (4, 1) and "Col stays with his 4" at (1, 4).

Figure 14  The game-tree analysis ("pruning") of the tree in Figure 13.

For the game-tree analysis of rational play we start at the bottommost nodes and work our way up the tree, transferring outcome labels up and "X-ing out" the position of the game that will not be passed through on the way to the final outcome. This is illustrated in Figure 14 above. Note, for example, that starting at the lower left part of the tree, Column has a choice between staying at (2, 2) or moving to (4, 1), where Row will definitely stay. Since Column prefers the "2" from "(2, 2)" to the "1" from "(4, 1)," the option to move will be rejected, as is indicated by the "railroad tracks." Moving one level higher on that same side of the tree, we see that Row has a choice between staying at (1, 4) and getting his worst outcome, or moving to (2, 2), which will turn out to be the final outcome. Clearly he does the latter, and so we "X-out" the edge leading to "stay at (1, 4)" and we replace the temporary (1, 4) label by the (2, 2) that we now know will be the final outcome if the game reaches this position.

Conclusion  The game-tree analysis from Figure 14 shows that rational play dictates an initial choice to stay at (3, 3) by Row, followed by Column's choice to also stay and to thus let (3, 3) be the final outcome as well as the initial position. Hence, (3, 3) is a non-myopic equilibrium when Row goes first.
For the three remaining cases, we will present only the analogues of Figure 14 and the conclusions they yield.

Case 2: The Initial Position is (2, 2) in Prisoner's Dilemma

Figure 15  The game-tree analysis for the initial position (2, 2).

Conclusion  The outcome (2, 2) is a non-myopic equilibrium when Row goes first (Figure 15 above). In fact, with (2, 2) as the initial position, rational play dictates that Row will choose to stay, as will Column.

Case 3: The Initial Position is (1, 4) in Prisoner's Dilemma

Figure 16  The game-tree analysis for the initial position (1, 4).

Conclusion  If the initial position is (1, 4) in Prisoner's Dilemma, then Row will switch strategies, thus moving the outcome to (2, 2) (Figure 16 above). Column will then choose to stay and the game will end at (2, 2). Intuitively, this says that if Column is being aggressive and Row is not, then Row will respond to this by also being aggressive, and that's where things will stay.

Case 4: The Initial Position is (4, 1) in Prisoner's Dilemma

Figure 17  The game-tree analysis for the initial position (4, 1).

Conclusion  If the initial position is (4, 1) in Prisoner's Dilemma, then Row will switch strategies and thus move the outcome to (3, 3) (Figure 17 above). Column will then choose to stay. Intuitively, if Row is being aggressive and Column is not, then Row realizes that if he does not back off to a nonaggressive stance, then Column will become aggressive and the (2, 2) stalemate will prevail instead of the (3, 3) compromise.

The following table summarizes the theory of moves in Prisoner's Dilemma for the play where Row goes first.

    Initial Position    Final Outcome
    (3, 3)              (3, 3)
    (2, 2)              (2, 2)
    (1, 4)              (2, 2)
    (4, 1)              (3, 3)

Notice that both Row and Column want (3, 3) as a final outcome instead of (2, 2). Thus, both want either (3, 3) or (4, 1) as the initial position. However—and this is a crucial observation—Column alone can guarantee this simply by choosing C as his initial strategy. Then, if Row chooses C we start at (3, 3), and if Row chooses N, we start at (4, 1). Thus, Column has a dominant strategy of "C."

Although the above analysis has been for the case where Row goes first, it is now easy to see what happens when Column goes first. That is, the game is symmetric. Thus, if we were to go through the corresponding analysis in the latter case, we'd similarly find that (3, 3) and (2, 2) are non-myopic equilibria when Column goes first and that the (3, 3) final outcome occurs as the result of a dominant strategy of initial cooperation, this time by Row. In particular, we can now drop the phrase "when Row goes first" and simply conclude that (3, 3) and (2, 2) are non-myopic equilibria, and that (3, 3) arises as a final outcome as the result of a dominant strategy of initial cooperation on the part of whichever player is not getting to move first.

Before concluding this section, let's return to the Yom Kippur War and consider the sequential version of Prisoner's Dilemma provided by the theory of moves as a potential model for the events that unfolded at that time. Given that the initial position was clearly one of mutual nonintervention—the (3, 3) outcome in our model—the model accurately predicts exactly what happened. That is, neither side elected to change its initial choice of strategy.
Notice that since (3, 3) is a non-myopic equilibrium, the question of whether the United States or the Soviet Union is "designated" as going first doesn't arise. However, it seems clear that in the analysis of the situation by both sides, the Soviets were more likely to play this role.

■ 4.8 CONCLUSIONS

In this chapter, we've introduced 2 x 2 ordinal games in general as well as the two most interesting examples of such. The first—Prisoner's Dilemma—is one in which both players (independently) have dominant strategies leading to a (2, 2) outcome that both consider inferior to the (3, 3) outcome that is available. The (2, 2) outcome also turns out to be stable in the sense of being a Nash equilibrium (where neither player can gain by unilaterally changing his or her strategy). We also presented in this chapter the classic application of Prisoner's Dilemma as a model of the U.S.-Soviet arms race of the 1960s, 1970s, and 1980s.

The second 2 x 2 ordinal game introduced in this chapter is Chicken. This game is quite different from Prisoner's Dilemma in the sense that Prisoner's Dilemma has an expected, although paradoxically unfortunate, (2, 2) outcome, while there are no dominant strategies in Chicken, although (2, 4) and (4, 2) are stable outcomes (arrived at only by flirting with the disastrous (1, 1) outcome). As an application of Chicken, we constructed two different models of the Cuban missile crisis. The difference between these models is in the choice of strategies available to the two players.

We also considered the Yom Kippur War, and observed that the naive 2 x 2 ordinal game-theoretic model simply did not work in the sense of predicting what actually took place. With this in mind, we turned, in Section 4.7, to a more complicated game involving the so-called theory of moves. In particular, the theory of moves explains why an initial position of mutual cooperation on a Prisoner's Dilemma game board will persist, even when both sides have the opportunity to (alternately) change strategies.

Exercises

1. Suppose Row ranks the four possible outcomes, from best to worst, in a 2 x 2 ordinal game as CN, CC, NC, NN, and Column ranks the four, again from best to worst, as CC, NN, NC, CN.

(a) Set up the 2 x 2 matrix (as in Figure 2 in Section 4.2) giving Row's preference ranking.
(b) Set up the 2 x 2 matrix (as in Figure 3 in Section 4.2) giving Column's preference ranking.
(c) Express all this information in a single 2 x 2 matrix (as in Figure 4 in Section 4.2).

2. Write out the proof that N is a dominant strategy for Column in Prisoner's Dilemma.

3. Show that C is a dominant strategy for (a) Row and (b) Column in the following game.

                 Column
    Row        (1, 3)    (2, 1)

4. In the following 2 x 2 ordinal game:

(a) Show that C is not a dominant strategy for Row.
(b) Show that N is not a dominant strategy for Row.
(c) Show that C is not a dominant strategy for Column.
(d) Show that N is not a dominant strategy for Column.

                   Column
                   C         N
    Row     C    (2, 3)    (3, 1)
            N    (4, 2)    (1, 4)

5. In this chapter and in the exercises so far, we have dealt with how to prove that a given strategy is dominant in a particular 2 x 2 ordinal game.

    Column's Choice    Row's Best Response
    C
    N

For this particular game, we can see that if Column chooses C, then the outcome will be either (3, 1) or (1, 4), and Row would certainly prefer the "3" from (3, 1) to the "1" from (1, 4). Thus, Row's best response to a choice of C by Column is C, since this is what yields the outcome (3, 1). A similar analysis when Column chooses N shows that Row's best response is also C in this case.
Thus, the rest of the chart can be filled out as follows:

    Column's Choice    Row's Best Response
    C                  C (because 3 > 1)
    N                  C (because 4 > 2)

From this we can conclude that Row has a dominant strategy of C. Notice, however, that the above chart is a poor excuse for a proof that C is a dominant strategy for Row. That is, a proof is a convincing argument, and the above chart conveys little to anyone who does not already understand the material. On the other hand, the chart (together with the preference matrix) should make it easy for the reader to:

(a) Write down a proof (with sentences as in the proof for Prisoner's Dilemma from Section 4.4) that C is a dominant strategy for Row in the above game.

(b) Fill out the following chart (which is the analogue for Column of what we just did for Row):

    Row's Choice    Column's Best Response
    C
    N

(c) Use what you found from the chart in part (b) to prove that Column has no dominant strategy. (This should look like the proof for Chicken in Section 4.5.)

Notice that in filling out these charts, there are four possibilities for what can occur below the "Best Response" label:

    C    C    N    N
    C    N    C    N

In the first case, C is a dominant strategy, and in the last case, N is a dominant strategy. In the second case, the optimal strategy suggested is called "tit-for-tat." In the third case, it is called "tat-for-tit."

6. Find the dominant strategies in the following game and prove that they are, in fact, dominant.

                   Column
                   C         N
    Row     C    (2, 1)    (1, 2)
            N    (4, 3)    (3, 4)

7. Determine if there are any dominant strategies in the following game.

                   Column
                   C         N
    Row     C    (2, 4)    (4, 1)
            N    (3, 2)    (1, 3)

8. Extend what is done in Exercise 5 to answer the following: Does Column have a dominant strategy in the following 2 x 3 game where Column has three choices: C, N, and V? (Intuition: think of V as very uncooperative.) Each player ranks the six possible outcomes from 6 (best) to 1 (worst).

                   Column
                   C         N         V
    Row     C    (5, 4)    (3, 5)    (2, 6)
            N    (6, 1)    (4, 2)    (1, 3)

9. Suppose that CC is a (4, 4) outcome in a 2 x 2 ordinal game. Does this guarantee that C is a dominant strategy for both Row and Column? (Either explain why it does, or find a 2 x 2 ordinal game showing that it need not.)

10. In the following game:

                   Column
                   C         N
    Row     C    (2, 3)    (4, 2)
            N    (1, 1)    (3, 4)

(a) Show that (2, 3) is a Nash equilibrium.
(b) Show that (4, 2) is not a Nash equilibrium.
(c) Is (3, 4) a Nash equilibrium? (Why or why not?)
(d) Is (1, 1) a Nash equilibrium? (Why or why not?)

11. Suppose that Row and Column both have dominant strategies in a 2 x 2 ordinal game. Explain why the result of these strategies (used simultaneously) is a Nash equilibrium.

12. Consider the following game:

                   Column
                   C         N
    Row     C    (2, 2)    (3, 3)
            N    (1, 4)    (4, 1)

(a) Prove that Row has no dominant strategy.
(b) Prove that Column has no dominant strategy.
(c) Prove that this game has no Nash equilibrium.

13. Consider a two-player game in which the players simultaneously show a penny, either heads up or tails up. If both players show heads, then both players lose their pennies to a lucky third party, and if both players show tails, each player keeps his or her own penny. If both players show different sides, then the player who shows heads gets both coins.

(a) Write down the two-by-two matrix for this game.
(b) Is Chicken or Prisoner's Dilemma or neither a model for this game?
(c) Do the players have a dominant strategy?
(d) Is there a Nash equilibrium?

14. Consider the following game:

                   Column
                   C         N
    Row     C    (2, 3)    (4, 2)
            N    (1, 1)    (3, 4)

(a) Prove that Row has a dominant strategy. What is it?
(b) Prove that Column has no dominant strategy.
(c) Are there any Nash equilibria?

15. In long distance cycling races, drafting is a frequent phenomenon. When one cyclist rides behind someone else, the wind resistance is cut, and it is much easier to pedal; experts suggest that the cyclist in back can save between 20% and 40% of his energy during the race. Top cycling teams often use this strategy; the team players take turns riding in front of the team leader, who then has a better chance of winning the race. Suppose that two friends enter a cycling race, and at one point near the end of the race, the two cyclists find themselves a good distance ahead of the rest of the group. Their energy is lagging, and if both riders continue to work alone, the rest of the pack will soon catch up, and neither will win. If the two take turns drafting, then they will remain ahead of the pack for a while; it's possible that one of the two will win, but it's more likely that they will both tire enough that someone else passes them in the end. If either cyclist pulls just ahead of his friend, however, allowing him to draft the rest of the race, then the two will remain ahead of the pack, and the cyclist in back will certainly have the energy to pull ahead in the last leg and win. Each cyclist would prefer to win the race, but would rather see his friend win than a stranger. Model this scenario with a 2 x 2 ordinal game, and determine what, if anything, the model predicts will happen. Is the game Prisoner's Dilemma, Chicken, or neither?

16. Kathryn and Nadia each plan to throw a New Year's Eve party; each one has a back-up date as well, and the two back-up dates do not conflict. Ideally, Kathryn hopes that she can throw the New Year's Eve party, and that Nadia will choose a different date. But if that doesn't happen, she really wants to be able to attend Nadia's party, even though she'll be very jealous if Nadia's party is on New Year's and she has to choose a different date.

(a) If Nadia feels the same way as Kathryn, write down a 2 x 2 ordinal game that models the situation. What, if anything, does the model predict will happen?

(b) Suppose that Nadia's first priority is that her party is on New Year's Eve, and that she would absolutely hate it if Kathryn gets to throw the New Year's Eve party and she is forced to choose a different date. Write down a new 2 x 2 ordinal game that models the situation. What, if anything, does the model predict will happen?

(c) For both scenarios above, is the game Prisoner's Dilemma, Chicken, or neither?

17. Consider the following hypothetical situation. NASA plans to launch a manned vehicle into space, but the engineers feel that it is unsafe. NASA has the options to launch or not, and the engineers have the option to go public with their reservations or not. Assume that NASA's first priority is that the engineers remain silent (because NASA honestly feels that they are wrong), and, as a second priority, NASA would rather launch than not launch. Additionally, assume that the engineers have a first priority of preventing the launch, and a second priority of going public with their reservations. Model this as a 2 x 2 game, and, in a few sentences, explain what outcome is predicted by the existence of dominant strategies.

18. Suppose there are two colleges, both competing for the same group of students (all of whom will go to one of the two colleges).
Suppose that each college knows that if one offers merit scholarships and the other doesn't, then the one that does will enroll more of the better students and more than justify the expense. However, if both offer merit scholarships, it will be costly and have no effect on which students enroll where. Model this as a 2 x 2 game, and, in a few sentences, explain what outcome is predicted by the existence of dominant strategies.

19. Do there exist 2 x 2 ordinal games with a Nash equilibrium that is not the result of dominant strategies by Row and Column? Give an example or prove that one does not exist.

20. (This requires extending what was in the text.) Find all Nash equilibria in the following 3 x 3 game:

                   Column
                   C         N         V
    Row     C    (1, 9)    (4, 2)    (7, 7)
            N    (3, 4)    (9, 3)    (5, 1)
            V    (6, 5)    (2, 6)    (8, 8)

21. Find all Nash equilibria for the following 3 x 3 game, and for each outcome that is not a Nash equilibrium, explain why it is not.

                   Column
                   C         N         V
    Row     C    (1, 4)    (2, 5)    (3, 3)
            N    (4, 8)    (5, 9)    (6, 2)
            V    (7, 5)    (8, 7)    (9, 1)

22. Consider the Democratic primaries prior to the 2008 presidential election. Assume that Hillary Clinton and Barack Obama had a choice of waging an aggressive (negative) campaign directed at the other's weaknesses, or waging a positive campaign based on their own strengths. Assume also that each felt that negative campaigning, unless answered in kind, would be advantageous to the one doing the negative campaigning, at least as far as the primaries are concerned. Notice, however, that mutual negative campaigning will certainly put the Democratic party in a worse position for the general election than mutual positive campaigning.

(a) Assuming that each candidate is more concerned with his or her own political success than doing what is best for the party, model this as a 2 x 2 game and discuss what this suggests as far as rational behavior on the part of the candidates.

(b) How does your model change if we assume that each candidate has the party's best interests in mind?

23. In Puccini's opera Tosca, the main characters are the beautiful Tosca, her lover Cavaradossi, and Scarpia, the chief of police. Scarpia has condemned Cavaradossi to death, but offers to spare his life (by arranging to have blanks in the guns of the firing squad) in exchange for Tosca's favors. Tosca agrees and a meeting between her and Scarpia is set (which—exercising mathematical license—we shall assume is for the same time as the execution). Tosca thus has a choice between submitting as agreed or double-crossing Scarpia (perhaps by not showing up; perhaps in some other way). Scarpia has a choice between arranging for the blanks as agreed or double-crossing Tosca by not doing so. Tosca considers having her lover spared to be more important than the issue of whether she submits or not, even though—other things being equal—she would rather not submit. Scarpia considers having Tosca submit to be more important than the issue of whether Cavaradossi is executed or not, even though—other things being equal—Scarpia would rather have him killed.

(a) Model this as a 2 x 2 ordinal game and then determine what, if anything, the model predicts will happen.

(b) Find out what happened in the opera and see if your predictions are correct.

24. The following report appeared in The Daily Gazette (Schenectady, NY, Sept. 25, 1993):

OPEC's high oil output and falling prices have cost member countries about $6 billion since the spring and some countries continue to exceed production limits, the cartel said.
One day ahead of a crucial meeting on Saturday, the Organization of Petroleum Exporting Countries and its dozen members were pumping about a million barrels above the ceiling of 23.6 million barrels.

To better understand this, let's consider a hypothetical version of OPEC consisting of six countries. Assume that as the number of barrels of oil produced by OPEC per day increases, the price decreases according to the following table (which is also hypothetical):

    Barrels per day produced (in millions)    24    25    26    27    28    29    30
    Resulting price per barrel (in dollars)   24    23    22    21    20    19    18

Suppose OPEC agrees that each of the six countries will produce four million barrels per day, even though each country has the ability to produce five million barrels per day at no additional cost to itself. Suppose also that if anyone violates the agreement, no one will know who did (but everyone will know how many countries did because of the resulting price per barrel). Assume you are the leader of one of the six OPEC countries and you are only concerned with financial gain for your country. You have to decide whether to produce four million barrels per day or five million barrels per day.

(a) The number of OPEC countries, other than yours, who produce five million barrels per day instead of four million could be 0, 1, 2, 3, 4, or 5. For each of these six cases, determine if your country is better off financially producing five million barrels per day or four million barrels per day.

(b) Still assuming your only concern is immediate financial gain for your country, what does (a) indicate you should do and how compelling is this indication?

(c) If all six countries care only about their own immediate financial gain, what does (b) suggest will happen?

(d) Given what you said in (c), how does your country fare financially compared to how it would do if everyone (including you) stuck to the original agreement?

(e) In a well-written paragraph or two, discuss how this hypothetical scenario is similar in spirit to something that arose in our study of 2 x 2 ordinal games.

25. In 1960 William Newcomb, a physicist, posed the following problem: Suppose there are two boxes labeled A and B. You have a choice between taking box B alone or taking both A and B. God has definitely placed $1,000 in box A. In box B, He placed either $1,000,000 or nothing, depending upon whether He knew you'd take box B alone (in which case He placed $1,000,000 in box B) or take both (in which case He placed nothing in box B). The question is: Do you take box B alone or do you take both?

You can answer this if you want to, but that's not the point of this exercise. In fact, hundreds of philosophical papers have been written on this problem. Most people think the answer is obvious, although they tend to split quite evenly on which answer is obvious and which answer is clearly wrong.

(a) Give an argument that suggests you should take both boxes.

(b) Give an argument that suggests you should take box B alone.

(c) Indicate which argument you find most compelling and why.

(d) Consider the following 2 x 2 ordinal game:

                   Column
                   C         N
    Row     C    (3, 4)    (1, 3)
            N    (4, 1)    (2, 2)

Prove that Row has a dominant strategy of N. Now suppose that we change the rules of the game so that Row chooses first, and then Column—knowing what Row did—chooses second. Explain why, even though Row has a dominant strategy of N in the game with the usual rules, Row should choose C in this version of the game where Row moves first.
25. In 1960 William Newcomb, a physicist, posed the following problem: Suppose there are two boxes labeled A and B. You have a choice between taking box B alone or taking both A and B. God has definitely placed $1,000 in box A. In box B, He placed either $1,000,000 or nothing, depending upon whether He knew you'd take box B alone (in which case He placed $1,000,000 in box B) or take both (in which case He placed nothing in box B). The question is: Do you take box B alone or do you take both?

You can answer this if you want to, but that's not the point of this exercise. In fact, hundreds of philosophical papers have been written on this problem. Most people think the answer is obvious, although they tend to split quite evenly on which answer is obvious and which answer is clearly wrong.
(a) Give an argument that suggests you should take both boxes.
(b) Give an argument that suggests you should take box B alone.
(c) Indicate which argument you find most compelling and why.
(d) Consider the following 2 x 2 ordinal game:

                 Column
               C        N
         C   (3, 4)   (1, 3)
    Row
         N   (4, 1)   (2, 2)

Prove that Row has a dominant strategy of N. Now suppose that we change the rules of the game so that Row chooses first, and then Column, knowing what Row did, chooses second. Explain why, even though Row has a dominant strategy of N in the game with the usual rules, Row should choose C in this version of the game where Row moves first.

Here is our resolution of Newcomb's problem. (There are hundreds of "resolutions" in the literature; the reader should take the authors' with the grain of salt it probably deserves.) Consider the following 2 x 2 ordinal game between God and us. God has two choices: to put $1,000 in box A and $1,000,000 in box B, or to put $1,000 in box A and $0 in box B. We also have two choices: take both boxes or take box B alone. Our ranking of the outcomes is clear, since the dollar amounts we receive for the four possible outcomes are $1,001,000; $1,000,000; $1,000; and $0. God, on the other hand, apparently regards the upper left outcome as better than the upper right outcome (rewarding us for not being greedy). Similarly, He would seem to regard the lower right outcome as better than the lower left outcome (punishing us for our greed).

                                         God
                             $1,000 in A          $1,000 in A
                             $1,000,000 in B      $0 in B
         Choose box B alone    (3, a+)              (1, a)
    Us
         Choose both boxes     (4, b)               (2, b+)

Notice that this game, assuming only that a+ is greater than a and b+ is greater than b, has the same property as the game in part (d): We have a dominant strategy of "choose both" in the usual play of the game, but, in the game where we must move first, we are better off not using this strategy. This is the paradoxical nature of God's action being based on His knowledge of what we will do: Which game is being played, the one where we go first (and if He knows what we will do, surely this is equivalent to our already having done it), or the one where we move independently (as in the usual play of a 2 x 2 ordinal game)?

26. A two-player game is said to be a somewhat finite game if every play of the game ends after finitely many moves. "Hypergame" was created by William Zwicker in the late 1970s. It is played by two players as follows: The first move consists of Player 1 naming a somewhat finite game of his or her choice. The second move in this play of hypergame consists of Player 2 making a legitimate first move in the somewhat finite game named in move 1. Player 1 now makes a second move in the game named, and they continue to alternate until this play of the game named is completed. (In some ways, hypergame is like dealer's choice poker.)
(a) Write down a compelling argument that hypergame is a somewhat finite game.
(b) Write down a compelling argument that hypergame is not a somewhat finite game.
More on hypergame is readily available in Zwicker (1987).

27. Iterated Prisoner's Dilemma is a two-player game in which two players play the Prisoner's Dilemma game a fixed finite number N of times.
(a) Determine each player's strategy when N = 2.
(b) Determine each player's strategy when N = 3.
(c) Explain why each player's strategy remains the same no matter how large N is.

28. Robert Axelrod, a political scientist, organized a tournament in which participants played an iterated version of Prisoner's Dilemma; that is, the game is played a certain number of times, and the players may base their strategies in one round on their opponent's behavior in the previous round. The player who wins the most rounds is the winner. Some possible strategies are as follows.

Pure Cooperation. The player cooperates during every iteration of the game.
Pure Non-Cooperation. The player does not cooperate during any iteration of the game.
Random. The player flips a coin for every iteration of the game: if heads comes up, he cooperates, and if tails comes up, he does not cooperate.
Alternation. The player cooperates in every odd-numbered round (including the first) and does not cooperate in any even-numbered round.
Tit-for-Tat. The player cooperates during the first round of play. During all other rounds, the player uses the strategy that his opponent used during the previous round.

(a) Suppose that two players play a 5-round Iterated Prisoner's Dilemma, and both use the Tit-for-Tat strategy. Describe the outcome of the game, that is, who wins during each of the five stages.
(b) Suppose that two players play a 5-round Iterated Prisoner's Dilemma; Player 1 uses the Pure Non-Cooperation strategy, and Player 2 uses the Tit-for-Tat strategy. Discuss the outcome of the game.
(c) Suppose that two players play a 5-round Iterated Prisoner's Dilemma; Player 1 uses the Alternation strategy, and Player 2 uses the Tit-for-Tat strategy. Discuss the outcome of the game.
(d) Suppose that you are playing a 5-round Iterated Prisoner's Dilemma, and you know your opponent will use the Tit-for-Tat strategy. What should you do at each stage of the game?
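Parts (a) through (c) of Exercise 28 can be worked out by hand, but a short simulation makes the round-by-round bookkeeping easy to check. The Python sketch below is ours, not part of the exercise; it represents a strategy as a function that looks at the list of the opponent's past moves and returns "C" or "N".

```python
# Illustrative sketch for Exercise 28 (not part of the original text).
# A strategy is a function: given the opponent's past moves, it returns
# 'C' (cooperate) or 'N' (noncooperate).

def tit_for_tat(opponent_history):
    return 'C' if not opponent_history else opponent_history[-1]

def pure_noncooperation(opponent_history):
    return 'N'

def alternation(opponent_history):
    # Cooperate in odd-numbered rounds, do not cooperate in even-numbered rounds.
    round_number = len(opponent_history) + 1
    return 'C' if round_number % 2 == 1 else 'N'

def play(strategy_1, strategy_2, rounds=5):
    history_1, history_2 = [], []
    for _ in range(rounds):
        move_1 = strategy_1(history_2)   # each player sees only the opponent's past moves
        move_2 = strategy_2(history_1)
        history_1.append(move_1)
        history_2.append(move_2)
    return list(zip(history_1, history_2))

print(play(tit_for_tat, tit_for_tat))           # part (a)
print(play(pure_noncooperation, tit_for_tat))   # part (b)
print(play(alternation, tit_for_tat))           # part (c)
```

The printed lists give the pair of moves chosen in each of the five rounds; interpreting who "wins" each round, and what the simulation suggests for part (d), is left to the exercises.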
29. How might a player's strategy for Iterated Prisoner's Dilemma change if infinitely many rounds are played?

30. The ultimatum game is a two-player game, played as follows: Player 1 proposes a possible division of $1 between the two players (for example, they might split the $1 evenly between them). Only divisions using quarters (no dimes, nickels, or pennies) are allowed. Player 2 has two options: she can either accept the division, in which case the dollar is split as proposed, or she can reject the division, in which case neither player receives anything.
(a) Assuming each player just wants to maximize his or her profit, what is Player 2's dominant strategy? What about Player 1? (A small enumeration sketch appears after Exercise 35.)
(b) In practice, a large percentage of the people in Player 1's role offer a near 50-50 split. Compare this to your results in part (a). How might you explain this difference?

31. Suppose the Soviets think that the correct model of the Yom Kippur War is the one in Figure 11 in Section 4.6. Based on this model, what would the Soviets expect to happen?

32. In a few sentences each, explain the steps in the analysis pictured in Figures 15, 16, and 17 in Section 4.7.

33. Show that (3, 3) is a non-myopic equilibrium in the theory of moves version of Chicken.

34. Do an analysis of the theory of moves version of Chicken that is analogous to what was done for Prisoner's Dilemma.

35. Show that the theory of moves version of the following game is not finite. Assume that (2, 3) is the initial position and Row goes first.

                 Column
           (2, 3)   (3, 1)
    Row
           (4, 2)   (1, 4)
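For part (a) of Exercise 30, the allowed divisions can simply be enumerated. The short Python sketch below is ours, with illustrative names, and is not part of the exercise; it lists Player 2's payoff from accepting versus rejecting each proposal, which is all that is needed to reason about dominance.

```python
# Illustrative enumeration for Exercise 30(a) (not part of the original text).

offers_to_player_2 = [0, 25, 50, 75, 100]   # amounts, in cents, that Player 1 might offer

for offer in offers_to_player_2:
    player_1_keeps = 100 - offer
    accept = (player_1_keeps, offer)   # payoffs (Player 1, Player 2) if Player 2 accepts
    reject = (0, 0)                    # payoffs if Player 2 rejects
    print(f"offer {offer} cents to Player 2: accept -> {accept}, reject -> {reject}")
```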