Monday, 6 May 2013

TEACHING NOTES ON THEORIES AND STORIES OF GAME THEORY


THEORY OF GAME THEORY from Pindyck and Rubinfeld
A game is any situation in which players (the participants) make strategic decisions — i.e. decisions that take into account each other's actions and responses.
Strategic decisions result in payoffs to the players: outcomes that generate rewards or benefits.
A key objective of game theory is to determine the optimal strategy for each player. A strategy is a rule or plan of action for playing the game.
The optimal strategy for a player is the one that maximizes her expected payoff.
We will focus on games involving players who are rational, in the sense that they think through the consequences of their actions. In essence, we are concerned with the following question: If I believe that my competitors are rational and act to maximize their own payoffs, how should I take their behaviour into account when making my decisions?
The economic games that firms play can be either cooperative or non-cooperative. In a cooperative game, players can negotiate binding contracts that allow them to plan joint strategies. In a non-cooperative game, negotiation and enforcement of binding contracts are not possible.
Consider the following game devised by Martin Shubik. A dollar bill is auctioned, but in an unusual way. The highest bidder receives the dollar in return for the amount bid. However, the second-highest bidder must also hand over the amount that he or she bid — and get nothing in return.
In some experiments, the "winning" bidder has ended up paying more than $3 for the dollar! This happens because players make the mistake of failing to think through the likely response of the other players and the sequence of events it implies.
Martin Shubik, Game Theory in the Social Sciences (Cambridge, MA: MIT Press, 1982)
How can we decide on the best strategy for playing a game? How can we determine a game's likely outcome?
We begin with the concept of a dominant strategy - one that is optimal no matter what an opponent does.
The following example illustrates this in a duopoly setting. Suppose Firms A and B sell competing products and are deciding whether to undertake advertising campaigns. Each firm will be affected by its competitor's decision. The possible outcomes of the game are illustrated by the payoff matrix in Table 1.1.
Observe that if both firms advertise, Firm A will earn a profit of 10 and Firm B a profit of 5. If Firm A advertises and Firm B does not, Firm A will earn 15 and Firm B zero. The table also shows the outcomes for the other two possibilities.
                                                              
Table 1.1          Payoff Matrix for Advertising Game

                                        Firm B
                              Advertise      Don't advertise
Firm A  Advertise               10, 5             15, 0
        Don't advertise          6, 8             10, 2

What strategy should each firm choose? First consider Firm A. It should clearly advertise because no matter what Firm B does, Firm A does best by advertising. If Firm B advertises, A earns a profit of 10 if it advertises but only 6 if it does not. If B does not advertise, A earns 15 if it advertises but only 10 if it doesn't. Thus advertising is a dominant strategy for Firm A. The same is true for Firm B: No matter what Firm A does, Firm B does best by advertising. Therefore, assuming that both firms are rational, we know that the outcome for this game is that both firms will advertise. This outcome is easy to determine because both firms have dominant strategies.
When every player has a dominant strategy, we call the outcome of the game an equilibrium in dominant strategies.
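The idea can be checked mechanically. The following sketch (the code and its names are my own, not from the text) scans the payoff matrix of Table 1.1 for a strategy that is strictly best against every opponent action:

```python
# Payoffs are (Firm A, Firm B); rows are Firm A's choices, columns Firm B's.
payoffs = {
    ("Advertise", "Advertise"): (10, 5),
    ("Advertise", "Don't advertise"): (15, 0),
    ("Don't advertise", "Advertise"): (6, 8),
    ("Don't advertise", "Don't advertise"): (10, 2),
}
strategies = ["Advertise", "Don't advertise"]

def dominant_strategy(player):
    """Return a strategy that is strictly best against every opponent
    strategy, or None if no such strategy exists."""
    for s in strategies:
        best_everywhere = True
        for opp in strategies:
            for alt in strategies:
                if alt == s:
                    continue
                if player == 0:  # Firm A picks the row
                    if payoffs[(s, opp)][0] <= payoffs[(alt, opp)][0]:
                        best_everywhere = False
                else:            # Firm B picks the column
                    if payoffs[(opp, s)][1] <= payoffs[(opp, alt)][1]:
                        best_everywhere = False
        if best_everywhere:
            return s
    return None

print(dominant_strategy(0))  # Advertise
print(dominant_strategy(1))  # Advertise
```

Both players come out with "Advertise", matching the reasoning above.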
Unfortunately, not every game has a dominant strategy for each player. To see this, let's change our advertising example slightly. The payoff matrix in Table 1.2 is the same as in Table 1.1 except for the bottom right-hand corner—if neither firm advertises, Firm B will again earn a profit of 2, but Firm A will earn a profit of 20. (Perhaps Firm A's ads are expensive and largely designed to refute Firm B's claims, so by not advertising, Firm A can reduce its expenses considerably.)
Table 1.2          Modified Advertising Game

                                        Firm B
                              Advertise      Don't advertise
Firm A  Advertise               10, 5             15, 0
        Don't advertise          6, 8             20, 2

Now Firm A has no dominant strategy. Its optimal decision depends on what Firm B does. If Firm B advertises, Firm A does best by advertising; but if Firm B does not advertise, Firm A also does best by not advertising. Now suppose both firms must make their decisions at the same time. What should Firm A do?
To answer this, Firm A must put itself in Firm B's shoes. What decision is best from Firm B's point of view, and what is Firm B likely to do? The answer is clear: Firm B has a dominant strategy—advertise, no matter what Firm A does. (If Firm A advertises, B earns 5 by advertising and 0 by not advertising; if A doesn't advertise, B earns 8 if it advertises and 2 if it doesn't.) Therefore, Firm A can conclude that Firm B will advertise. This means that Firm A should advertise (and thereby earn 10 instead of 6). The logical outcome of the game is that both firms will advertise because Firm A is doing the best it can given Firm B's decision; and Firm B is doing the best it can given Firm A's decision.
A Nash equilibrium is a set of strategies (or actions) such that each player is doing the best it can given the actions of its opponents. Because each player has no incentive to deviate from its Nash strategy, the strategies are stable.
In the example shown in Table 1.2, the Nash equilibrium is that both firms advertise. Given the decision of its competitor, each firm is satisfied that it has made the best decision possible, and so has no incentive to change its decision.
It is helpful to compare the concept of a Nash equilibrium with that of an equilibrium in dominant strategies:
Dominant Strategies:   I'm doing the best I can no matter what you do.
You're doing the best you can no matter what I do.
Nash Equilibrium:          I'm doing the best I can given what you are doing.
You're doing the best you can given what I am doing.
Note that a dominant strategy equilibrium is a special case of a Nash equilibrium. In the advertising game of Table 1.2, there is a single Nash equilibrium: both firms advertise. In general, a game need not have a single Nash equilibrium. Sometimes there is no Nash equilibrium, and sometimes there are several (i.e., several sets of strategies are stable and self-enforcing). A few more examples will help to clarify this.
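A pure-strategy Nash equilibrium can be found by brute force: check every cell of the matrix for a profitable unilateral deviation. A minimal sketch (illustrative code, not from the text), applied to the modified advertising game of Table 1.2:

```python
# Payoffs are (Firm A, Firm B).
payoffs = {
    ("Advertise", "Advertise"): (10, 5),
    ("Advertise", "Don't advertise"): (15, 0),
    ("Don't advertise", "Advertise"): (6, 8),
    ("Don't advertise", "Don't advertise"): (20, 2),
}
strategies = ["Advertise", "Don't advertise"]

def nash_equilibria():
    """A cell is a Nash equilibrium if neither player gains by deviating
    unilaterally."""
    eq = []
    for a in strategies:
        for b in strategies:
            a_ok = all(payoffs[(a, b)][0] >= payoffs[(alt, b)][0]
                       for alt in strategies)
            b_ok = all(payoffs[(a, b)][1] >= payoffs[(a, alt)][1]
                       for alt in strategies)
            if a_ok and b_ok:
                eq.append((a, b))
    return eq

print(nash_equilibria())  # [('Advertise', 'Advertise')]
```

Only (Advertise, Advertise) survives the deviation check, as the text argues.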
The Product Choice Problem Consider the following "product choice" problem. Two breakfast cereal companies face a market in which two new variations of cereal can be successfully introduced—provided that each variation is introduced by only one firm. There is a market for a new "crispy" cereal and a market for a new "sweet" cereal, but each firm has the resources to introduce only one new product. The payoff matrix for the two firms might look like the one in Table 1.3.

Table 1.3          Product Choice Problem

                                  Firm 2
                           Crispy        Sweet
Firm 1  Crispy             -5, -5       10, 10
        Sweet              10, 10       -5, -5

In this game, each firm is indifferent about which product it produces—so long as it does not introduce the same product as its competitor. If coordination were possible, the firms would probably agree to divide the market. But what if the firms must behave non-cooperatively? Suppose that somehow—perhaps through a news release—Firm 1 indicates that it is about to introduce the sweet cereal, and that Firm 2 (after hearing this) announces its plan to introduce the crispy one. Given the action that it believes its opponent to be taking, neither firm has an incentive to deviate from its proposed action. If it takes the proposed action, its payoff is 10, but if it deviates—and its opponent's action remains unchanged—its payoff will be -5. Therefore, the strategy set given by the bottom left-hand corner of the payoff matrix is stable and constitutes a Nash equilibrium: Given the strategy of its opponent, each firm is doing the best it can and has no incentive to deviate.
Note that the upper right-hand corner of the payoff matrix is also a Nash equilibrium, which might occur if Firm 1 indicated that it was about to produce the crispy cereal. Each Nash equilibrium is stable because once the strategies are chosen, no player will unilaterally deviate from them. However, without more information, we have no way of knowing which equilibrium (crispy/sweet vs. sweet/crispy) is likely to result—or if either will result. Of course, both firms have a strong incentive to reach one of the two Nash equilibria—if they both introduce the same type of cereal, they will both lose money. The fact that the two firms are not allowed to collude does not mean that they will not reach a Nash equilibrium. As an industry develops, understandings often evolve as firms "signal" each other about the paths the industry is to take.
Beach Location Game Suppose that you (Y) and a competitor (C) plan to sell soft drinks on a beach in Goa this summer. The beach is 200 meters long, and sunbathers are spread evenly across its length. You and your competitor sell the same soft drinks at the same prices, so customers will walk to the closest vendor. Where on the beach will you locate, and where do you think your competitor will locate?
If you think about this for a minute, you will see that the only Nash equilibrium calls for both you and your competitor to locate at the same spot in the center of the beach. To see why, suppose your competitor located at some other point (A), which is three quarters of the way to the end of the beach. In that case, you would no longer want to locate in the center; you would locate near your competitor, just to the left. You would thus capture nearly three-fourths of all sales, while your competitor got only the remaining fourth. This outcome is not an equilibrium because your competitor would then want to move to the center of the beach, and you would do the same.
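The pull toward the center can be seen by simulating best-response dynamics on a discrete version of the beach (a hypothetical sketch; the 1-meter grid and the starting positions are my own assumptions):

```python
# The beach is 200 meters long with sunbathers spread uniformly; customers
# walk to the closer vendor, so a vendor at x against a rival at y serves
# everything up to the midpoint between them.
def share(x, y):
    """Meters of beach served by the vendor at x against a rival at y."""
    if x == y:
        return 100.0
    lo, hi = min(x, y), max(x, y)
    return (lo + hi) / 2 if x < y else 200 - (lo + hi) / 2

def best_response(y):
    return max(range(201), key=lambda x: share(x, y))

# Start the rival three quarters of the way down the beach, as in the text,
# and let the two vendors alternate best responses.
you, rival = 0, 150
for _ in range(500):
    you = best_response(rival)
    rival = best_response(you)

print(you, rival)  # 100 100: both vendors end up at the center
```

Each vendor keeps edging just inside the other until both sit at the midpoint, which is the unique Nash equilibrium described above.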
The "beach location game" can help us understand a variety of phenomena. Have you ever noticed how, along a two- or three-mile stretch of road, two or three gas stations or several car dealerships will be located close to each other?
Maximin Strategies
The concept of a Nash equilibrium relies heavily on individual rationality. Each player's choice of strategy depends not only on its own rationality, but also on the rationality of its opponent. This can be a limitation, as the example in Table 1.4 shows.
Table 1.4          Maximin Strategy

                                      Firm 2
                            Don't invest      Invest
Firm 1  Don't invest            0, 0          -10, 10
        Invest               -100, 0           20, 10

In this game, two firms compete in selling file-encryption software. Because both firms use the same encryption standard, files encrypted by one firm's software can be read by the other's—an advantage for consumers. Nonetheless, Firm 1 has a much larger market share. (It entered the market earlier and its software has a better user interface.) Both firms are now considering an investment in a new encryption standard.
Note that investing is a dominant strategy for Firm 2 because by doing so it will do better regardless of what Firm 1 does. Thus Firm 1 should expect Firm 2 to invest. In this case, Firm 1 would also do better by investing (and earning Rs 20 crore) than by not investing (and losing Rs 10 crore). Clearly the outcome (invest, invest) is a Nash equilibrium for this game, and you can verify that it is the only Nash equilibrium. But note that Firm 1's managers had better be sure that Firm 2's managers understand the game and are rational. If Firm 2 should happen to make a mistake and fail to invest, it would be extremely costly to Firm 1. (Consumer confusion over incompatible standards would arise, and Firm 1, with its dominant market share, would lose Rs 100 crore.)
If you were Firm 1, what would you do? If you tend to be cautious—and if you are concerned that the managers of Firm 2 might not be fully informed or rational—you might choose to play "don't invest." In that case, the worst that can happen is that you will lose Rs 10 crore; you no longer have a chance of losing Rs 100 crore. This strategy is called a maximin strategy because it maximizes the minimum gain that can be earned. If both firms used maximin strategies, the outcome would be that Firm 1 does not invest and Firm 2 does. A maximin strategy is conservative, but it is not profit-maximizing. (Firm 1, for example, loses Rs 10 crore rather than earning Rs 20 crore.) Note that if Firm 1 knew for certain that Firm 2 was using a maximin strategy, it would prefer to invest (and earn Rs 20 crore) instead of following its own maximin strategy of not investing.
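The maximin rule can be written down directly (a sketch with illustrative names; payoffs as in Table 1.4, in Rs crore):

```python
# Payoffs are (Firm 1, Firm 2).
payoffs = {
    ("Don't invest", "Don't invest"): (0, 0),
    ("Don't invest", "Invest"): (-10, 10),
    ("Invest", "Don't invest"): (-100, 0),
    ("Invest", "Invest"): (20, 10),
}
strategies = ["Don't invest", "Invest"]

def maximin(player):
    """Pick the strategy whose worst-case payoff is largest."""
    def worst_case(s):
        if player == 0:
            return min(payoffs[(s, opp)][0] for opp in strategies)
        return min(payoffs[(opp, s)][1] for opp in strategies)
    return max(strategies, key=worst_case)

print(maximin(0))  # Firm 1: Don't invest (worst case -10 beats -100)
print(maximin(1))  # Firm 2: Invest (worst case 10 beats 0)
```

This reproduces the outcome in the text: under maximin play, Firm 1 stays out and Firm 2 invests.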
Maximizing the Expected Payoff If Firm 1 is unsure about what Firm 2 will do but can assign probabilities to each feasible action for Firm 2, it could instead use a strategy that maximizes its expected payoff. Suppose, for example, that Firm 1 thinks that there is only a 10-percent chance that Firm 2 will not invest. In that case, Firm 1's expected payoff from investing is (0.1)(-100) + (0.9)(20) = Rs 8 crore. Its expected payoff if it doesn't invest is (0.1)(0) + (0.9)(-10) = -Rs 9 crore. In this case, Firm 1 should invest.
On the other hand, suppose Firm 1 thinks that the probability that Firm 2 will not invest is 30 percent. Then Firm 1's expected payoff from investing is (0.3)(-100) + (0.7)(20) = -Rs 16 crore, while its expected payoff from not investing is (0.3)(0) + (0.7)(-10) = -Rs 7 crore. Thus Firm 1 will choose not to invest.
You can see that Firm 1's strategy depends critically on its assessment of the probabilities of different actions by Firm 2. Determining these probabilities may seem like a tall order. However, firms often face uncertainty (over market conditions, future costs, and the behavior of competitors), and must make the best decisions they can based on probability assessments and expected values.
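The two calculations above, plus the break-even probability at which Firm 1 switches actions, can be reproduced as follows (a sketch; the break-even algebra is my own addition, and payoffs are in Rs crore):

```python
def expected_payoffs(p_not_invest):
    """Firm 1's expected payoffs (invest, don't invest), where p_not_invest
    is the probability that Firm 2 fails to invest."""
    p, q = p_not_invest, 1 - p_not_invest
    invest = p * (-100) + q * 20
    dont = p * 0 + q * (-10)
    return invest, dont

print(expected_payoffs(0.1))  # approx (8, -9): investing is better
print(expected_payoffs(0.3))  # approx (-16, -7): not investing is better

# Break-even: -100p + 20(1 - p) = -10(1 - p)  =>  p = 30/130, about 0.23
print(expected_payoffs(30 / 130))  # the two payoffs coincide here
```

So Firm 1 invests whenever it believes the chance of Firm 2 not investing is below roughly 23 percent.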
Mixed Strategies
In all of the games that we have examined so far, we have considered strategies in which players make a specific choice or take a specific action: advertise or don't advertise, set a price of Rs 4 or a price of Rs 6, and so on. Strategies of this kind are called pure strategies. There are games, however, in which a pure strategy is not the best way to play.
Matching Pennies An example is the game of "Matching Pennies." In this game, each player chooses heads or tails and the two players reveal their coins at the same time. If the coins match (i.e., both are heads or both are tails), Player A wins and receives a rupee from Player B. If the coins do not match, Player B wins and receives a rupee from Player A. The payoff matrix is shown in Table 1.6.
Table 1.6          Matching Pennies

                               Player B
                          Heads        Tails
Player A  Heads           1, -1        -1, 1
          Tails          -1, 1          1, -1
Note that there is no Nash equilibrium in pure strategies for this game. Suppose, for example, that Player A chose the strategy of playing heads. Then Player B would want to play tails. But if Player B plays tails, Player A would also want to play tails. No combination of heads or tails leaves both players satisfied—one player or the other will always want to change strategies.
Although there is no Nash equilibrium in pure strategies, there is a Nash equilibrium in mixed strategies: strategies in which players make random choices among two or more possible actions, based on sets of chosen probabilities. In this game, for example, Player A might simply flip the coin, thereby playing heads with probability 1/2 and playing tails with probability 1/2. In fact, if Player A follows this strategy and Player B does the same, we will have a Nash equilibrium: Both players will be doing the best they can given what the opponent is doing. Note that although the outcome is random, the expected payoff is 0 for each player.
It may seem strange to play a game by choosing actions randomly. But put yourself in the position of Player A and think what would happen if you followed a strategy other than just flipping the coin. Suppose you decided to play heads. If Player B knows this, she would play tails and you would lose. Even if Player B didn't know your strategy, if the game were played repeatedly, she could eventually discern your pattern of play and choose a strategy that countered it. Of course, you would then want to change your strategy—which is why this would not be a Nash equilibrium. Only if you and your opponent both choose heads or tails randomly with probability 1/2 would neither of you have any incentive to change strategies. (You can check that the use of different probabilities, say 3/4 for heads and 1/4 for tails, does not generate a Nash equilibrium.)
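That check is easy to run. If Player A plays heads with probability p, Player B's expected payoff from heads is -p + (1 - p) and from tails is p - (1 - p); only p = 1/2 leaves B without a strictly better reply (a sketch with my own function names):

```python
def b_payoffs(p_heads_a):
    """Player B's expected payoff from playing heads or tails, given that
    Player A plays heads with probability p_heads_a."""
    p = p_heads_a
    heads = p * (-1) + (1 - p) * 1   # B wins a rupee when the coins differ
    tails = p * 1 + (1 - p) * (-1)
    return heads, tails

print(b_payoffs(0.5))   # (0.0, 0.0): B is indifferent, so 1/2 is stable
print(b_payoffs(0.75))  # (-0.5, 0.5): B strictly prefers tails
```

Against the 3/4-heads strategy, B plays tails every time, and A would then want to switch, so no equilibrium results.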
One reason to consider mixed strategies is that some games (such as "Matching Pennies") do not have any Nash equilibria in pure strategies. It can be shown, however, that once we allow for mixed strategies, every game has at least one Nash equilibrium. Mixed strategies, therefore, provide solutions to games when pure strategies fail. Of course, whether solutions involving mixed strategies are reasonable will depend on the particular game and players. Mixed strategies are likely to be very reasonable for "Matching Pennies," poker, and other such games. A firm, on the other hand, might not find it reasonable to believe that its competitor will set its price randomly.
The Battle of the Sexes Some games have Nash equilibria both in pure strategies and in mixed strategies. An example is "The Battle of the Sexes," a game that you might find familiar. It goes like this. Raman and Radhika would like to spend Saturday night together but have different tastes in entertainment. Raman would like to go to the opera, but Radhika prefers a movie. As the payoff matrix in Table 1.7 shows, Raman would most prefer to go to the opera with Radhika, but prefers watching a movie with Radhika to going to the opera alone, and similarly for Radhika.


Table 1.7          The Battle of the Sexes

                               Raman
                          Movie        Opera
Radhika  Movie            2, 1         0, 0
         Opera            0, 0         1, 2

First, note that there are two Nash equilibria in pure strategies for this game—the one in which Raman and Radhika both watch the movie, and the one in which they both go to the opera. Radhika, of course, would prefer the first of these outcomes and Raman the second, but both outcomes are equilibria—neither Raman nor Radhika would want to change his or her decision, given the decision of the other.
This game also has an equilibrium in mixed strategies: Radhika chooses movie with probability 2/3 and opera with probability 1/3, and Raman chooses movie with probability 1/3 and opera with probability 2/3. You can check that if Radhika uses this strategy, Raman cannot do better with any other strategy, and vice versa. The outcome is random, and Raman and Radhika will each have an expected payoff of 2/3.
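The indifference check can be done with exact arithmetic (a sketch; payoffs are (Radhika, Raman) as in Table 1.7):

```python
from fractions import Fraction

payoffs = {
    ("Movie", "Movie"): (2, 1),
    ("Movie", "Opera"): (0, 0),
    ("Opera", "Movie"): (0, 0),
    ("Opera", "Opera"): (1, 2),
}
radhika = {"Movie": Fraction(2, 3), "Opera": Fraction(1, 3)}
raman = {"Movie": Fraction(1, 3), "Opera": Fraction(2, 3)}

# Radhika's expected payoff from each pure action, given Raman's mix:
for action in ("Movie", "Opera"):
    value = sum(raman[b] * payoffs[(action, b)][0] for b in raman)
    print(action, value)  # both equal 2/3, so Radhika is indifferent
```

The same computation with the roles reversed shows Raman is also indifferent, so neither can profit by deviating and each expects a payoff of 2/3.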
In real life, firms play repeated games: Actions are taken and payoffs received over and over again.
Robert Axelrod, The Evolution of Cooperation (New York: Basic Books, 1984)
Infinitely Repeated Game Suppose the game is infinitely repeated. In other words, my competitor and I repeatedly set prices month after month, forever. Cooperative behavior (i.e., charging a high price) is then the rational response to a tit-for-tat strategy. (This assumes that my competitor knows, or can figure out, that I am using a tit-for-tat strategy.)
With infinite repetition of the game, the expected gains from cooperation will outweigh those from undercutting. This will be true even if the probability that I am playing tit-for-tat (and so will continue cooperating) is small.
Now suppose the game is repeated a finite number of times—say, N months. (N can be large as long as it is finite.) If my competitor (Firm 2) is rational and believes that I am rational, he will reason as follows: "Because Firm 1 is playing tit-for-tat, I (Firm 2) cannot undercut—that is, until the last month. I should undercut the last month because then I can make a large profit that month, and afterward the game is over, so Firm 1 cannot retaliate. Therefore, I will charge a high price until the last month, and then I will charge a low price."
However, since I (Firm 1) have also figured this out, I also plan to charge a low price in the last month. Of course, Firm 2 can figure this out as well, and therefore knows that I will charge a low price in the last month. But then what about the next-to-last month? Because there will be no cooperation in the last month, anyway, Firm 2 figures that it should undercut and charge a low price in the next-to-last month. But, of course, I have figured this out too, so I also plan to charge a low price in the next-to-last month. And because the same reasoning applies to each preceding month, the game unravels: The only rational outcome is for both of us to charge a low price every month.
Tit-for-Tat in Practice Since most of us do not expect to live forever, the unravelling argument would seem to make the tit-for-tat strategy of little value, leaving us stuck in the prisoners' dilemma. In practice, however, tit-for-tat can sometimes work and cooperation can prevail. There are two primary reasons.
First, most managers don't know how long they will be competing with their rivals, and this also serves to make cooperative behavior a good strategy. If the end point of the repeated game is unknown, the unravelling argument that begins with a clear expectation of undercutting in the last month no longer applies. As with an infinitely repeated game, it will be rational to play tit-for-tat.
Second, my competitor might have some doubt about the extent of my rationality. Suppose my competitor thinks (and he need not be certain) that I am playing tit-for-tat. He also thinks that perhaps I am playing tit-for-tat "blindly," or with limited rationality, in the sense that I have failed to work out the logical implications of a finite time horizon as discussed above. My competitor thinks, for example, that perhaps I have not figured out that he will undercut me in the last month, so that I should also charge a low price in the last month, and so on. "Perhaps," thinks my competitor, "Firm 1 will play tit-for-tat blindly, charging a high price as long as I charge a high price." Then (if the time horizon is long enough), it is rational for my competitor to maintain a high price until the last month (when he will undercut me).
Note that we have stressed the word perhaps. My competitor need not be sure that I am playing tit-for-tat “blindly” or even that I am playing tit-for-tat at all. Just the possibility can make cooperative behavior a good strategy (until near the end) if the time horizon is long enough. Although my competitor's conjecture about how I am playing the game might be wrong, cooperative behavior is profitable in expected value terms. With a long time horizon, the sum of current and future profits, weighted by the probability that the conjecture is correct, can exceed the sum of profits from price competition, even if my competitor is the first to undercut. After all, if I am wrong and my competitor charges a low price, I can shift my strategy at the cost of only one period's profit—a minor cost in light of the substantial profit that I can make if we both choose to set a high price.
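The expected-value argument can be illustrated with a small simulation. The stage payoffs below are illustrative numbers of my own, arranged as a prisoners' dilemma with "C" meaning charge the high price and "D" meaning undercut:

```python
# Stage-game payoffs (player A, player B) for each pair of actions.
STAGE = {("C", "C"): (50, 50), ("C", "D"): (0, 100),
         ("D", "C"): (100, 0), ("D", "D"): (10, 10)}

def play(strategy_a, strategy_b, rounds):
    """Run a repeated game; each strategy sees the opponent's history."""
    history_a, history_b, total_a, total_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(history_b)
        b = strategy_b(history_a)
        pa, pb = STAGE[(a, b)]
        total_a += pa
        total_b += pb
        history_a.append(a)
        history_b.append(b)
    return total_a, total_b

tit_for_tat = lambda opp: "C" if not opp else opp[-1]   # copy last move
always_defect = lambda opp: "D"

print(play(tit_for_tat, tit_for_tat, 12))    # (600, 600): sustained cooperation
print(play(tit_for_tat, always_defect, 12))  # (110, 210): one bad round, then matching
```

Against a defector, tit-for-tat gives up only one period's profit before matching the low price, which is exactly the "minor cost" the text describes.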
Table 1.5          Prisoners' Dilemma

                                     Prisoner B
                             Confess       Don't confess
Prisoner A  Confess           -5, -5          -1, -10
            Don't confess    -10, -1          -2, -2

Thus, in a repeated game, the prisoners' dilemma can have a cooperative outcome. In most markets, the game is in fact repeated over a long and uncertain length of time, and managers have doubts about how "perfectly rationally" they and their competitors operate. As a result, in some industries, particularly those in which only a few firms compete over a long period under stable demand and cost conditions, cooperation prevails, even though no contractual arrangements are made. (The water meter industry, discussed below, is an example.) In many other industries, however, there is little or no cooperative behavior.
Sometimes cooperation breaks down or never begins because there are too many firms.
More often, failure to cooperate is the result of rapidly shifting demand or cost conditions. Uncertainties about demand or costs make it difficult for the firms to reach an implicit understanding of what cooperation should entail. (Remember that an explicit understanding, arrived at through meetings and discussions, could lead to an antitrust violation.) Suppose, for example, that cost differences or different beliefs about demand lead one firm to conclude that cooperation means charging Rs 50 while a second firm thinks it means Rs 40. If the second firm charges Rs 40, the first firm might view that as a grab for market share and respond in tit-for-tat fashion with a Rs 35 price. A price war could then develop.

In most of the games we have discussed so far, both players move at the same time.
In sequential games, players move in turn.
There are many other examples: an advertising decision by one firm and the response by its competitor; entry-deterring investment by an incumbent firm and the decision whether to enter the market by a potential competitor; or a new government regulatory policy and the investment and output response of the regulated firms.
As a simple example, let's return to the product choice problem discussed earlier.
This time, let's change the payoff matrix slightly.
The new sweet cereal will inevitably be a better seller than the new crispy cereal, earning a profit of 20 rather than 10 (perhaps because consumers prefer sweet things to crispy things). Both new cereals will still be profitable, however, as long as each is introduced by only one firm.
Suppose that both firms, in ignorance of each other's intentions, must announce their decisions independently and simultaneously. In that case, both firms will probably introduce the sweet cereal—and both will lose money.
Now suppose that Firm 1 can gear up its production faster and introduce its new cereal first. We now have a sequential game: Firm 1 introduces a new cereal, and then Firm 2 introduces one. What will be the outcome of this game? When making its decision, Firm 1 must consider the rational response of its competitor. It knows that whichever cereal it introduces, Firm 2 will introduce the other kind. Thus it will introduce the sweet cereal, knowing that Firm 2 will respond by introducing the crispy one.
Although this outcome can be deduced from the payoff matrix in Table 1.9, sequential games are sometimes easier to visualize if we represent the possible moves in the form of a decision tree. This representation is called the extensive form of a game. It shows the possible choices of Firm 1 (introduce a crispy or a sweet cereal) and the possible responses of Firm 2 to each of those choices. The resulting payoffs are given at the end of each branch. For example, if Firm 1 produces a crispy cereal and Firm 2 responds by also producing a crispy cereal, each firm will have a payoff of -5.
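The backward-induction reasoning can be sketched in code, using the payoffs described in the text (the sweet cereal earns 20, the crispy one 10, and a head-to-head clash loses 5 for each firm; the code names are my own):

```python
# Payoffs are (Firm 1, Firm 2); Firm 1 moves first.
payoffs = {
    ("Crispy", "Crispy"): (-5, -5),
    ("Crispy", "Sweet"): (10, 20),
    ("Sweet", "Crispy"): (20, 10),
    ("Sweet", "Sweet"): (-5, -5),
}
choices = ["Crispy", "Sweet"]

def follower_response(leader_choice):
    """Firm 2 picks its best reply to whatever Firm 1 introduced."""
    return max(choices, key=lambda c: payoffs[(leader_choice, c)][1])

def leader_choice():
    """Firm 1 anticipates Firm 2's reply to each of its own options."""
    return max(choices, key=lambda c: payoffs[(c, follower_response(c))][0])

first = leader_choice()
second = follower_response(first)
print(first, second, payoffs[(first, second)])  # Sweet Crispy (20, 10)
```

Working back from Firm 2's replies, Firm 1 introduces the sweet cereal and Firm 2 settles for the crispy one.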
In this product-choice game, there is a clear advantage to moving first: By introducing the sweet cereal, Firm 1 leaves Firm 2 little choice but to introduce the crispy one.
What actions can a firm take to gain advantage in the marketplace? For example, how might a firm deter entry by potential competitors, or induce existing competitors to raise prices, reduce output, or leave the market altogether?
Making a commitment—constraining its future behavior—is crucial. To see why, suppose that the first mover (Firm 1) could later change its mind in response to what Firm 2 does. What would happen? Clearly, Firm 2 would produce a large output. Why? Because it knows that Firm 1 will respond by reducing the output that it first announced. The only way that Firm 1 can gain a first-mover advantage is by committing itself. In effect, Firm 1 constrains Firm 2's behavior by constraining its own behaviour.
The idea of constraining your own behavior to gain an advantage may seem paradoxical, but we will soon see that it is not. Let's consider a few examples. First, let's return once more to the product-choice problem shown in Table 1.9.


Table 1.9          Modified Product Choice Problem

                                  Firm 2
                           Crispy        Sweet
Firm 1  Crispy             -5, -5       10, 20
        Sweet              20, 10       -5, -5

The firm that introduces its new breakfast cereal first will do best. But which firm will introduce its cereal first? Even if both firms require the same amount of time to gear up production, each has an incentive to commit itself first to the sweet cereal. The key word is commit. If Firm 1 simply announces it will produce the sweet cereal, Firm 2 will have little reason to believe it. After all, Firm 2, knowing the incentives, can make the same announcement louder and more vociferously. Firm 1 must constrain its own behavior in some way that convinces Firm 2 that Firm 1 has no choice but to produce the sweet cereal. Firm 1 might launch an expensive advertising campaign describing the new sweet cereal well before its introduction, thereby putting its reputation on the line. Firm 1 might also sign a contract for the forward delivery of a large quantity of sugar (and make the contract public, or at least send a copy to Firm 2). The idea is for Firm 1 to commit itself to produce the sweet cereal. Commitment is a strategic move that will induce Firm 2 to make the decision that Firm 1 wants it to make—namely, to produce the crispy cereal.
Why can't Firm 1 simply threaten Firm 2, vowing to produce the sweet cereal even if Firm 2 does the same? Because Firm 2 has little reason to believe the threat—and can make the same threat itself. A threat is useful only if it is credible. The following example should help make this clear.
Empty Threats
Suppose Firm 1 produces personal computers that can be used both as word processors and to do other tasks. Firm 2 produces only dedicated word processors. As the payoff matrix in Table 1.11 shows, as long as Firm 1 charges a high price for its computers, both firms can make a good deal of money. Even if Firm 2 charges a low price for its word processors, many people will still buy Firm 1's computers (because they can do so many other things), although some buyers will be induced by the price differential to buy the dedicated word processor instead. However, if Firm 1 charges a low price, Firm 2 will also have to charge a low price (or else make zero profit), and the profit of both firms will be significantly reduced.
Table 1.11        Pricing of Computers and Word Processors

                                     Firm 2
                            High price      Low price
Firm 1  High price           100, 80         80, 100
        Low price             20, 0          10, 20

Firm 1 would prefer the outcome in the upper left-hand corner of the matrix. For Firm 2, however, charging a low price is clearly a dominant strategy. Thus the outcome in the upper right-hand corner will prevail (no matter which firm sets its price first).
Firm 1 would probably be viewed as the "dominant" firm in this industry because its pricing actions will have the greatest impact on overall industry profits. Can Firm 1 induce Firm 2 to charge a high price by threatening to charge a low price if Firm 2 charges a low price? No, as the payoff matrix in Table 1.11 makes clear: Whatever Firm 2 does, Firm 1 will be much worse off if it charges a low price. As a result, its threat is not credible.
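The credibility test is mechanical: a threat is credible only if carrying it out could ever be in the threatener's own interest. A sketch for Table 1.11 (illustrative code, names my own):

```python
# Payoffs are (Firm 1, Firm 2).
payoffs = {
    ("High", "High"): (100, 80), ("High", "Low"): (80, 100),
    ("Low", "High"): (20, 0),    ("Low", "Low"): (10, 20),
}

# The low-price threat is credible only if there is some Firm 2 action
# against which Firm 1 actually prefers the low price.
threat_credible = any(
    payoffs[("Low", b)][0] > payoffs[("High", b)][0] for b in ("High", "Low")
)
print(threat_credible)  # False: the threat is empty

# Meanwhile, the low price is dominant for Firm 2:
firm2_low_dominant = all(
    payoffs[(a, "Low")][1] > payoffs[(a, "High")][1] for a in ("High", "Low")
)
print(firm2_low_dominant)  # True
```

Since Firm 1 does strictly worse at a low price against either reply, Firm 2 can safely ignore the threat and play its dominant low price.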
Commitment and Credibility
Sometimes firms can make threats credible. To see how, consider the following example. Race Car Motors, Inc., produces cars, and Far Out Engines, Ltd., produces specialty car engines. Far Out Engines sells most of its engines to Race Car Motors, and a few to a limited outside market. Nonetheless, it depends heavily on Race Car Motors and makes its production decisions in response to Race Car's production plans.
Table 1.12(a)    Production Choice Problem

                                        Race Car Motors
                                Small cars        Big cars
Far Out    Small engines           3, 6             3, 0
Engines    Big engines             1, 1             8, 3

We thus have a sequential game in which Race Car is the "leader." It will decide what kind of cars to build, and Far Out Engines will then decide what kind of engines to produce. The payoff matrix in Table 1.12(a) shows the possible outcomes of this game. (Profits are in hundreds of crores of rupees.) Observe that Race Car will do best by deciding to produce small cars. It knows that in response to this decision, Far Out will produce small engines, most of which Race Car will then buy. As a result, Far Out will make Rs 300 crore and Race Car Rs 600 crore.
Far Out, however, would much prefer the outcome in the lower right-hand corner of the payoff matrix. If it could produce big engines, and if Race Car produced big cars and thus bought the big engines, it would make Rs 800 crore. (Race Car, however, would make only Rs 300 crore.) Can Far Out induce Race Car to produce big cars instead of small ones?
Suppose Far Out threatens to produce big engines no matter what Race Car does; suppose, too, that no other engine producer can easily satisfy the needs of Race Car. If Race Car believed Far Out's threat, it would produce big cars: otherwise, it would have trouble finding engines for its small cars and would earn only Rs 100 crore instead of Rs 300 crore. But the threat is not credible: once Race Car responded by announcing its intention to produce small cars, Far Out would have no incentive to carry out its threat.
Far Out can make its threat credible by visibly and irreversibly reducing some of its own payoffs in the matrix, thereby constraining its own choices. In particular, Far Out must reduce its profits from small engines (the payoffs in the top row of the matrix). It might do this by shutting down or destroying some of its small-engine production capacity. This would result in the payoff matrix shown in Table 1.12(b). Now Race Car knows that whatever kind of car it produces, Far Out will produce big engines. If Race Car produces small cars, Far Out will sell the big engines as best it can to other car producers and settle for making only Rs 100 crore. But this is better than making no profit producing small engines. Because Race Car will have to look elsewhere for engines, its profit will also be lower (Rs 100 crore). Now it is clearly in Race Car's interest to produce large cars. By taking an action that seemingly puts itself at a disadvantage, Far Out has improved its outcome in the game.
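The backward induction above can be sketched in code. This is a hypothetical illustration, with payoffs taken from the rupee figures quoted in the text (in hundreds of crores): the follower's best reply is computed for each of the leader's choices, and the leader then picks the choice whose induced outcome it prefers.

```python
def leader_choice(payoffs):
    """payoffs[(engine, car)] = (far_out, race_car), in hundreds of crores.
    Race Car (the leader) picks the car size whose induced outcome it prefers."""
    engines = ("Small engines", "Big engines")
    best = None
    for car in ("Small cars", "Big cars"):
        # Far Out's best reply once the car size is known
        engine = max(engines, key=lambda e: payoffs[(e, car)][0])
        outcome = (car, engine, payoffs[(engine, car)])
        if best is None or outcome[2][1] > best[2][1]:
            best = outcome
    return best

table_a = {  # original payoffs (Table 1.12(a))
    ("Small engines", "Small cars"): (3, 6), ("Small engines", "Big cars"): (3, 0),
    ("Big engines", "Small cars"): (1, 1),   ("Big engines", "Big cars"): (8, 3),
}
table_b = {  # after Far Out destroys its small-engine payoffs (Table 1.12(b))
    ("Small engines", "Small cars"): (0, 6), ("Small engines", "Big cars"): (0, 0),
    ("Big engines", "Small cars"): (1, 1),   ("Big engines", "Big cars"): (8, 3),
}

print(leader_choice(table_a))  # ('Small cars', 'Small engines', (3, 6))
print(leader_choice(table_b))  # ('Big cars', 'Big engines', (8, 3))
```

With the original payoffs the leader chooses small cars; once the small-engine payoffs are destroyed, the same backward induction leads Race Car to build big cars, exactly as the text argues.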
Table 1.12(b)    Modified Production Choice Problem

                                            Race Car Motors
                                   Small cars          Big cars

Far Out    Small engines           0, 6                0, 0
Engines    Big engines             1, 1                8, 3
Although strategic commitments of this kind can be effective, they are risky and depend heavily on having accurate knowledge of the payoff matrix and the industry. Suppose, for example, that Far Out commits itself to producing big engines but is surprised to find that another firm can produce small engines at a low cost. The commitment may then lead Far Out to bankruptcy rather than continued high profits.
The Role of Reputation
Developing the right kind of reputation can also give one a strategic advantage. Again, consider Far Out Engines' desire to produce big engines for Race Car Motors' big cars. Suppose that the managers of Far Out Engines develop a reputation for being irrational — perhaps downright crazy. They threaten to produce big engines no matter what Race Car Motors does (refer to Table 1.12(a)). Now the threat might be credible without any further action; after all, you can't be sure that an irrational manager will always make a profit-maximizing decision. In gaming situations, the party that is known (or thought) to be a little crazy can have a significant advantage.
Developing a reputation can be an especially important strategy in a repeated game. A firm might find it advantageous to behave irrationally for several plays of the game. This might give it a reputation that will allow it to increase its long-run profits substantially.
Bargaining Strategy
Our discussion of commitment and credibility also applies to bargaining problems. The outcome of a bargaining situation can depend on the ability of either side to take an action that alters its relative bargaining position.

Table 1.13        Production Decision

                                    Firm 2
                         Produce A          Produce B

Firm 1    Produce A      40, 5              50, 50
          Produce B      60, 40             5, 45
For example, consider two firms that are each planning to introduce one of two products which are complementary goods. As the payoff matrix in Table 1.13 shows, Firm 1 has a cost advantage over Firm 2 in producing A. Therefore, if both firms produce A, Firm 1 can maintain a lower price and earn a higher profit. Similarly, Firm 2 has a cost advantage over Firm 1 in producing product B. If the two firms could agree about who will produce what, the rational outcome would be the one in the upper right-hand corner: Firm 1 produces A, Firm 2 produces B, and both firms make profits of 50. Indeed, even without cooperation, this outcome will result whether Firm 1 or Firm 2 moves first or both firms move simultaneously. Why? Because producing B is a dominant strategy for Firm 2, so (A, B) is the only Nash equilibrium.
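The claim that (A, B) is the only Nash equilibrium can be verified by brute force. The following sketch (hypothetical code, not from the text) enumerates the cells of Table 1.13 and keeps those from which neither firm can gain by deviating unilaterally:

```python
# Table 1.13: payoffs[(firm1_choice, firm2_choice)] = (firm1, firm2).
payoffs = {
    ("A", "A"): (40, 5),  ("A", "B"): (50, 50),
    ("B", "A"): (60, 40), ("B", "B"): (5, 45),
}
strategies = ("A", "B")

def nash_equilibria():
    """All pure-strategy cells where each firm is best-responding to the other."""
    eqs = []
    for s1 in strategies:
        for s2 in strategies:
            u1, u2 = payoffs[(s1, s2)]
            best1 = all(u1 >= payoffs[(d, s2)][0] for d in strategies)
            best2 = all(u2 >= payoffs[(s1, d)][1] for d in strategies)
            if best1 and best2:
                eqs.append((s1, s2))
    return eqs

print(nash_equilibria())  # [('A', 'B')]
```

Only the upper right-hand cell survives: at (A, B) Firm 1 would fall from 50 to 5 by switching to B, and Firm 2 would fall from 50 to 5 by switching to A.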
Firm 1, of course, would prefer the outcome in the lower left-hand corner of the payoff matrix. But in the context of this limited set of decisions, it cannot achieve that outcome. Suppose, however, that Firms 1 and 2 are also bargaining over a second issue—whether to join a research consortium that a third firm is trying to form. Table 1.14 shows the payoff matrix for this decision problem. Clearly, the dominant strategy is for both firms to enter the consortium, thereby increasing profits to 40.
Now suppose that Firm 1 links the two bargaining problems by announcing that it will join the consortium only if Firm 2 agrees to produce product A. In this case, it is indeed in Firm 2's interest to produce A (with Firm 1 producing B) in return for Firm 1's participation in the consortium. This example illustrates how combining issues in a bargaining agenda can sometimes benefit one side at the other's expense.
As another example, consider bargaining over the price of a house. Suppose I, as a potential buyer, do not want to pay more than Rs 2,00,000 for a house that is actually worth Rs 2,50,000 to me. The seller is willing to part with the house at any price above Rs 1,80,000 but would like to receive the highest price she can. If I am the only bidder for the house, how can I make the seller think that I will walk away rather than pay more than Rs 2,00,000?
I might declare that I will never, ever pay more than Rs 2,00,000 for the house. But is such a promise credible? It may be if the seller knows that I have a reputation for toughness and that I have never reneged on a promise of this sort. But suppose I have no such reputation. Then the seller knows that I have every incentive to make the promise (making it costs nothing) but little incentive to keep it. (This will probably be our only business transaction together.) As a result, this promise by itself is not likely to improve my bargaining position.
The promise can work, however, if it is combined with an action that gives it credibility. Such an action must reduce my flexibility—limit my options—so that I have no choice but to keep the promise. One possibility would be to make an enforceable bet with a third party—for example, "If I pay more than Rs 2,00,000 for that house, I'll pay you Rs 60,000." Alternatively, if I am buying the house on behalf of my company, the company might insist on authorization by the Board of Directors for a price above Rs 2,00,000, and announce that the board will not meet again for several months. In both cases, my promise becomes credible because I have destroyed my ability to break it. The result is less flexibility—and more bargaining power.
EXAMPLE  12.3
Wal-Mart Stores' Preemptive Investment Strategy
Wal-Mart Stores, Inc., is an enormously successful chain of discount retail stores in the United States started by Sam Walton in 1969. Its success was unusual in the industry. During the 1960s and 1970s, rapid expansion by existing firms and the entry and expansion of new firms made discount retailing increasingly competitive. During the 1970s and 1980s, industry-wide profits fell, and large discount chains—including such giants as King's, Korvette's, Mammoth Mart, W. T. Grant, and Woolco—went bankrupt. Wal-Mart Stores, however, kept on growing and became even more profitable. By the end of 1985, Sam Walton was one of the richest people in the United States.
How did Wal-Mart Stores succeed where others failed? The key was Wal-Mart's expansion strategy. To charge less than ordinary department stores and small retail stores, discount stores rely on size, no frills, and high inventory turnover. Through the 1960s, the conventional wisdom held that a discount store could succeed only in a city with a population of 100,000 or more. Sam Walton disagreed and decided to open his stores in small Southwestern towns; by 1970, there were 30 Wal-Mart stores in small towns in Arkansas, Missouri, and Oklahoma. The stores succeeded because Wal-Mart had created 30 "local monopolies." Discount stores that had opened in larger towns and cities were competing with other discount stores, which drove down prices and profit margins. These small towns, however, had room for only one discount operation. Wal-Mart could undercut the non-discount retailers and never had to worry that another discount store would open and compete with it.
By the mid-1970s, other discount chains realized that Wal-Mart had a profitable strategy: Open a store in a small town that could support only one discount store and enjoy a local monopoly. There are a lot of small towns in the United States, so the issue became who would get to each town first. Wal-Mart now found itself in a preemption game of the sort illustrated by the payoff matrix in Table 1.15. As the matrix shows, if Wal-Mart enters a town but Company X does not, Wal-Mart will make 20 and Company X will make 0. Similarly, if Wal-Mart doesn't enter but Company X does, Wal-Mart makes 0 and Company X makes 20. But if Wal-Mart and Company X both enter, they both lose 10.
This game has two Nash equilibria—the lower left-hand corner and the upper right-hand corner. Which equilibrium results depends on who moves first. If Wal-Mart moves first, it can enter, knowing that the rational response of Company X will be not to enter, so that Wal-Mart will be assured of earning 20. The trick, therefore, is to preempt—to set up stores in other small towns quickly, before Company X (or Company Y or Z) can do so. That is exactly what Wal-Mart did. By 1986, it had 1009 stores in operation and was earning an annual profit of $450 million. And while other discount chains were going under, Wal-Mart continued to grow. By 1999, Wal-Mart had become the world's largest retailer, with 2454 stores in the United States and another 729 stores in the rest of the world, and had annual sales of $138 billion.
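Using the payoffs described in the text (both lose 10 if both enter; the sole entrant makes 20; staying out yields 0), a small sketch can confirm both claims: there are two pure-strategy equilibria, and whoever moves first captures the profitable one. The code is a hypothetical illustration.

```python
# Preemption game: payoffs[(walmart_move, x_move)] = (walmart, company_x).
payoffs = {
    ("Enter", "Enter"): (-10, -10), ("Enter", "Stay out"): (20, 0),
    ("Stay out", "Enter"): (0, 20), ("Stay out", "Stay out"): (0, 0),
}
moves = ("Enter", "Stay out")

def is_nash(w, x):
    """Neither player gains by a unilateral deviation."""
    return (payoffs[(w, x)][0] >= max(payoffs[(d, x)][0] for d in moves)
            and payoffs[(w, x)][1] >= max(payoffs[(w, d)][1] for d in moves))

equilibria = [(w, x) for w in moves for x in moves if is_nash(w, x)]
print(equilibria)  # [('Enter', 'Stay out'), ('Stay out', 'Enter')]

def first_mover_outcome():
    """Wal-Mart moves first; Company X best-responds; Wal-Mart anticipates that."""
    def reply(w):
        return max(moves, key=lambda x: payoffs[(w, x)][1])
    w = max(moves, key=lambda m: payoffs[(m, reply(m))][0])
    return w, reply(w), payoffs[(w, reply(w))]

print(first_mover_outcome())  # ('Enter', 'Stay out', (20, 0))
```

Either one-enters cell is an equilibrium of the simultaneous game, but moving first selects the equilibrium that pays the mover 20, which is the logic behind preemption.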
In recent years, Wal-Mart has continued to preempt other retailers by opening new discount stores, warehouse stores (such as Sam's Club), and combination discount and grocery stores (Wal-Mart Supercenters) all over the world. Wal-Mart has been especially aggressive in applying its preemption strategy in other countries. As of 2007, Wal-Mart had about 3800 stores in the United States and about 2800 stores throughout Europe, Latin America, and Asia. Wal-Mart had also become the world's largest private employer, employing more than 1.6 million people worldwide.
Table 1.15        The Discount Store Preemption Game

                                       Company X
                             Enter              Don't enter

Wal-Mart    Enter            -10, -10           20, 0
            Don't enter      0, 20              0, 0
Barriers to entry, which are an important source of monopoly power and profits, sometimes arise naturally. For example, economies of scale, patents and licenses, or access to critical inputs can create entry barriers. However, firms themselves can sometimes deter entry by potential competitors.
To deter entry, the incumbent firm must convince any potential competitor that entry will be unprofitable. To see how this might be done, put yourself in the position of an incumbent monopolist facing a prospective entrant, Firm X. Suppose that to enter the industry, Firm X will have to pay a (sunk) cost of Rs 80 crore to build a plant. You, of course, would like to induce Firm X to stay out of the industry. If X stays out, you can continue to charge a high price and enjoy monopoly profits. As shown in the upper right-hand corner of the payoff matrix in Table 1.16(a), you would earn Rs 200 crore in profits.


Table 1.16(a)    Entry Possibilities

                                                    Potential Entrant
                                             Enter            Stay out

Incumbent    High price (accommodation)      100, 20          200, 0
             Low price (warfare)             70, -10          130, 0
If Firm X does enter the market, you must make a decision. You can be "accommodating," maintaining a high price in the hope that X will do the same. In that case, you will earn only Rs 100 crore in profit because you will have to share the market. New entrant X will earn a net profit of Rs 20 crore: Rs 100 crore minus the Rs 80 crore cost of constructing a plant. (This outcome is shown in the upper left-hand corner of the payoff matrix.) Alternatively, you can increase your production capacity, produce more, and lower your price. The lower price will give you a greater market share and a Rs 20 crore increase in revenues. Increasing production capacity, however, will cost Rs 50 crore, reducing your net profit to Rs 70 crore. Because warfare will also reduce the entrant's revenue by Rs 30 crore, it will have a net loss of Rs 10 crore. (This outcome is shown in the lower left-hand corner of the payoff matrix.) Finally, if Firm X stays out but you expand capacity and lower price nonetheless, your net profit will fall by Rs 70 crore (from Rs 200 crore to Rs 130 crore): the Rs 50 crore cost of the extra capacity and a Rs 20 crore reduction in revenue from the lower price with no gain in market share. Clearly this choice, shown in the lower right-hand corner of the matrix, would make no sense.
If Firm X thinks you will be accommodating and maintain a high price after it has entered, it will find it profitable to enter and will do so. Suppose you threaten to expand output and wage a price war in order to keep X out. If X takes the threat seriously, it will not enter the market because it can expect to lose Rs 10 crore. The threat, however, is not credible. As Table 1.16(a) shows (and as the potential competitor knows), once entry has occurred, it will be in your best interest to accommodate and maintain a high price. Firm X's rational move is to enter the market; the outcome will be the upper left-hand corner of the matrix.
But what if you can make an irrevocable commitment that will alter your incentives once entry occurs—a commitment that will give you little choice but to charge a low price if entry occurs? In particular, suppose you invest the Rs 50 crore now, rather than later, in the extra capacity needed to increase output and engage in competitive warfare should entry occur. Of course, if you later maintain a high price (whether or not X enters), this added cost will reduce your payoff.
We now have a new payoff matrix, as shown in Table 1.16(b).
Table 1.16(b)    Entry Deterrence

                                                    Potential Entrant
                                             Enter            Stay out

Incumbent    High price (accommodation)      50, 20           150, 0
             Low price (warfare)             70, -10          130, 0
As a result of your decision to invest in additional capacity, your threat to engage in competitive warfare is completely credible. Because you already have the additional capacity with which to wage war, you will do better in competitive warfare than you would by maintaining a high price. Because the potential competitor now knows that entry will result in warfare, it is rational for it to stay out of the market. Meanwhile, having deterred entry, you can maintain a high price and earn a profit of Rs 150 crore.
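The before-and-after logic of Tables 1.16(a) and 1.16(b) can be sketched as a two-stage game (hypothetical code, not from the text): the entrant anticipates the incumbent's best reply to each entry decision. Sinking the Rs 50 crore of capacity flips the incumbent's best reply to entry from accommodation to warfare, which reverses the entrant's decision.

```python
def entrant_decision(payoffs):
    """payoffs[(price, entry)] = (incumbent, entrant), in crores of rupees.
    The entrant moves first, anticipating the incumbent's profit-maximizing reply."""
    outcomes = {}
    for entry in ("Enter", "Stay out"):
        # incumbent's best reply once the entry decision is known
        price = max(("High price", "Low price"),
                    key=lambda p: payoffs[(p, entry)][0])
        outcomes[entry] = (price, payoffs[(price, entry)])
    # entrant compares its own payoff under each anticipated reply
    choice = max(outcomes, key=lambda e: outcomes[e][1][1])
    return choice, outcomes[choice]

before = {  # Table 1.16(a): capacity not yet built
    ("High price", "Enter"): (100, 20), ("High price", "Stay out"): (200, 0),
    ("Low price", "Enter"): (70, -10),  ("Low price", "Stay out"): (130, 0),
}
after = {  # Table 1.16(b): Rs 50 crore of capacity already sunk
    ("High price", "Enter"): (50, 20),  ("High price", "Stay out"): (150, 0),
    ("Low price", "Enter"): (70, -10),  ("Low price", "Stay out"): (130, 0),
}

print(entrant_decision(before))  # ('Enter', ('High price', (100, 20)))
print(entrant_decision(after))   # ('Stay out', ('High price', (150, 0)))
```

Before the investment, entry is met with accommodation and so pays 20; after it, entry would be met with warfare (70 > 50 for the incumbent) and would lose 10, so the entrant stays out and the incumbent earns 150.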
Can an incumbent monopolist deter entry without making the costly move of installing additional production capacity? Earlier we saw that a reputation for irrationality can bestow a strategic advantage. Suppose the incumbent firm has such a reputation. Suppose also that by means of vicious price-cutting, this firm has eventually driven out every entrant in the past, even though it incurred losses in doing so. Its threat might then be credible: The incumbent's irrationality suggests to the potential competitor that it might be better off staying away.
Of course, if the game described above were to be indefinitely repeated, then the incumbent might have a rational incentive to engage in warfare whenever entry actually occurs. Why? Because short-term losses from warfare might be outweighed by longer-term gains from preventing entry. Understanding this, the potential competitor might find the incumbent's threat of warfare credible and decide to stay out. Now the incumbent relies on its reputation for being rational—and far-sighted—to provide the credibility needed to deter entry. The success of this strategy depends on the time horizon and the relative gains and losses associated with accommodation and warfare.
We have seen that the attractiveness of entry depends largely on the way incumbents can be expected to react. In general, once entry has occurred, incumbents cannot be expected to maintain output at their pre-entry levels. Eventually, they may back off and reduce output, raising price to a new joint profit-maximizing level. Because potential entrants know this, incumbent firms must create a credible threat of warfare to deter entry. A reputation for irrationality can help. Indeed, this seems to be the basis for much of the entry-preventing behavior that goes on in actual markets. The potential entrant must consider that rational industry discipline can break down after entry occurs. By fostering an image of irrationality and belligerence, an incumbent firm might convince potential entrants that the risk of warfare is too high.
There is an analogy here to nuclear deterrence. Consider the use of a nuclear threat to deter the former Soviet Union from invading Western Europe during the Cold War. If it invaded, would the United States actually react with nuclear weapons, knowing that the Soviets would then respond in kind? Because it is not rational for the United States to react this way, a nuclear threat might not seem credible. But this assumes that everyone is rational; there is reason to fear an irrational response by the United States. Even if an irrational response is viewed as very improbable, it can be a deterrent, given the costliness of an error. The United States can thus gain by promoting the idea that it might act irrationally, or that events might get out of control once an invasion occurs. This is the "rationality of irrationality." See Thomas Schelling, The Strategy of Conflict (Harvard University Press, 1980).

STORIES OF GAME THEORY from Thinking Strategically by Dixit
Tit-for-tat is a variation of the "eye for an eye" rule of behaviour: do unto others as they have done unto you. More precisely, the strategy cooperates in the first period and from then on mimics the rival's action from the previous period.
Tit-for-tat is as clear and simple as you can get. It is nice in that it never initiates cheating. It is provocable, that is, it never lets cheating go unpunished. And it is forgiving, because it does not hold a grudge for too long and is willing to restore cooperation.
In spite of all this, we believe that tit-for-tat is a flawed strategy. The slightest possibility of misperceptions results in a complete breakdown in the success of tit-for-tat.
For instance, in 1982 the United States responded to Soviet spying and wiretapping of the U.S. embassy in Moscow by reducing the number of Soviet diplomats permitted to work in the United States. The Soviets responded by withdrawing the native support staff employed at the U.S. Moscow embassy and placed tighter limits on the size of the American delegation. As a result, both sides found it more difficult to carry out their diplomatic functions. Another series of tit-for-tat retaliations occurred in 1988, when the Canadians discovered spying on the part of the visiting Soviet diplomats. They reduced the size of the Soviet delegation, and the Soviets reduced the Canadian representation in the Soviet Union. In the end, both countries were bitter, and future diplomatic cooperation was more difficult.
The problem with tit-for-tat is that any mistake "echoes" back and forth. One side punishes the other for a defection, and this sets off a chain reaction. The rival responds to the punishment by hitting back. This response calls for a second punishment. At no point does the strategy accept a punishment without hitting back. The Israelis punish the Palestinians for an attack. The Palestinians refuse to accept the punishment and retaliate. The circle is complete and the punishments and reprisals become self-perpetuating.
What tit-for-tat lacks is a way of saying, "Enough is enough". It is dangerous to apply this simple rule in situations in which misperceptions are endemic. Tit-for-tat is too easily provoked. You should be more forgiving when a defection seems to be a mistake rather than the rule. Even if the defection was intentional, after a long-enough cycle of punishments it may still be time to call it quits and try reestablishing cooperation. At the same time, you don't want to be too forgiving and risk exploitation. How do you make this trade-off?
But what happens if there is a chance that one side misperceives the other's move?
No matter how unlikely a misperception is (even if it is one in a trillion), in the long run tit-for-tat will spend half of its time cooperating and half defecting, just as a random strategy does. When the probability of a misperception is small, it will take a lot longer for the trouble to arise. But then once a mistake happens, it will also take a lot longer to clear it up.
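This half-and-half claim is easy to check by simulation. The sketch below (illustrative code, not from the text) plays two tit-for-tat strategies against each other, where each side misreads the other's previous move with probability eps.

```python
import random

def tft_cooperation_rate(eps, periods, seed=0):
    """Fraction of moves that are cooperative when two tit-for-tat players
    each misperceive the rival's previous move with probability eps."""
    rng = random.Random(seed)
    a, b = "C", "C"          # both start by cooperating
    coop = 0
    for _ in range(periods):
        coop += (a == "C") + (b == "C")
        # each player mimics what it *thinks* the other just did
        seen_b = b if rng.random() >= eps else ("C" if b == "D" else "D")
        seen_a = a if rng.random() >= eps else ("C" if a == "D" else "D")
        a, b = seen_b, seen_a
    return coop / (2 * periods)

print(tft_cooperation_rate(eps=0.0, periods=1_000))       # 1.0: perfect cooperation
print(tft_cooperation_rate(eps=0.01, periods=200_000))    # close to 0.5
```

With eps = 0 the pair cooperates forever, but even a 1 percent misperception rate drives the long-run cooperation rate toward one half: a single error starts the alternating echo of punishments described above, and only another error can either repair it or tip it into mutual defection.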
The possibility of misperceptions means that you have to be more forgiving, but not forgetting, than simple tit-for-tat. This is true when there is a presumption that the chance of a misperception is small, say five percent. But what strategy would you adopt in a prisoners' dilemma in which there is a fifty percent chance that the other side will misinterpret (reverse) your actions? How forgiving should you be?
Once the probability of misunderstanding reaches fifty percent, there is no hope for achieving any cooperation in the prisoners' dilemma. You should always defect. Why? Consider two extremes. Imagine that you always cooperate. Your opponent will misinterpret your moves half the time. As a result, he will believe that you have defected half the time and cooperated half the time. What if you always defect? Again, you will be misinterpreted half the time. Now this is to your benefit, as the opponent believes that you spend half your time cooperating.
No matter what strategy you choose, you cannot have any effect on what your partner sees. It is as if your partner flips a coin to determine what he thinks you did. There is simply no connection with reality once the probability of a mistake reaches fifty percent. Since you have no hope of influencing your partner's subsequent choices, you might as well defect. Each period you will gather a higher payoff and it won’t hurt you in the future.
The moral is that it pays to be more forgiving, up to a point. Once the probability of mistakes gets too high, the possibility of maintaining cooperation in a prisoners' dilemma breaks down. It is just too easy to be taken advantage of. The large chance of misunderstandings makes it impossible to send clear messages through your actions. Without an ability to communicate through deeds, any hope for cooperation disappears.
A 50 percent chance of a misperception is the worst possi­ble case. If misperceptions were certain to occur you would interpret every message as its opposite, and there would be no misunderstandings.
Credibility is a problem with all strategic moves. If your unconditional move, or threat or promise, is purely oral, why should you carry it out if it turns out not to be in your interest to do so? But then others will look forward and reason backward to predict that you have no incentive to follow through, and your strategic move will not have the desired effect.
Establishing credibility in the strategic sense means that you are expected to carry out your unconditional moves, keep your promises, and make good on your threats.
The Eightfold Path to Credibility
Making your strategic moves credible is not easy. But it is not impossible, either. To make a strategic move credible, you must take a supporting or collateral action. Such an action is called commitment.
We offer eight devices for achieving credible commitments. Like the Buddhist prescription for Nirvana, we call this the "eightfold path" to credibility. Depending on the circumstances, one or more of these tactics may prove effective for you. Behind this system are three underlying principles.
The first principle is to change the payoffs of the game. The idea is to make it in your interest to follow through on your commitment: turn a threat into a warning, a promise into an assurance. This can be done through a variety of ways.
1.  Establish and use a reputation.
2.  Write contracts.
Both these tactics make it more costly to break the commit­ment than to keep it.
A second avenue is to change the game to limit your ability to back out of a commitment. In this category, we consider three possibilities. The most radical is simply to deny yourself any opportunity to back down, either by cutting yourself off from the situation or by destroying any avenues of retreat. There is even the possibility of removing yourself from the decision-making position and leaving the outcome to chance.
3.  Cut off communication.
4.  Burn bridges behind you.
5.  Leave the outcome to chance.
The first two principles can be combined: both the possible actions and their outcomes can be changed. If a large commitment is broken down into many smaller ones, then the gain from breaking a little one may be more than offset by the loss of the remaining contract. Thus we have
6.  Move in small steps.
A third route is to use others to help you maintain commitment. A team may achieve credibility more easily than an individual. Or you may simply hire others to act on your behalf.
7.  Develop credibility through teamwork.
8.  Employ mandated negotiating agents.
Reputation
If you try a strategic move in a game and then back off, you may lose your reputation for credibility. In a once-in-a-lifetime situation, reputation may be unimportant and therefore of little commitment value. But you typically play several games with different rivals at the same time, or the same rivals at different times. Then you have an incentive to establish a reputation, and this serves as a commitment that makes your strategic moves credible.
During the Berlin crisis in 1961, John F. Kennedy explained the importance of the U.S. reputation.
Another example is Israel's standing policy not to negotiate with terrorists. This is a threat intended to deter terrorists from taking hostages to barter for ransom or release of prisoners. If the no-negotiation threat is credible, terrorists will come to recognize the futility of their actions. In the meantime, Israel's resolve will be tested. Each time the threat must be carried out, Israel suffers; a refusal to compromise may sacrifice Israeli hostages' lives. Each confrontation with terrorists puts Israel's reputation and credibility on the line. Giving in means more than just meeting the current demands; it makes future terrorism more attractive.
The reputation effect is a two-edged sword for commitment. Sometimes destroying your reputation can create the possibility for a commitment. Destroying your reputation commits you not to take actions in the future that you can predict will not be in your best interests.
The question of whether to negotiate with hijackers helps illustrate the point. Before any particular hijacking has occurred, the government might decide to deter hijackings by threatening never to negotiate. However, the hijackers predict that after they commandeer the jet, the government will find it impossible to enforce a no-negotiation posture. How can a government deny itself the ability to negotiate with hijackers? One answer is to destroy the credibility of its promises. Imagine that after reaching a negotiated settlement, the government breaks its commitment and attacks the hijackers. This destroys any reputation the government has for trustworthy treatment of hijackers. It loses the ability to make a credible promise, and irreversibly denies itself the temptation to respond to a hijacker's threat. This destruction of the credibility of a promise makes credible the threat never to negotiate.
Contracts
A straightforward way to make your commitment credible is to agree to a punishment if you fail to follow through. If your kitchen remodeler gets a large payment up front, he is tempted to slow down the work. But a contract that specifies payment linked to the progress of the work and penalty clauses for delay can make it in his interest to stick to the schedule. The contract is the commitment device.
Actually, it's not quite that simple. Imagine that a dieting man offers to pay $500 to anyone who catches him eating fattening food. Every time the man thinks of a dessert he knows that it just isn't worth $500. Don't dismiss this example as incredible; just such a contract was offered by a Mr. Nick Russo — except the amount was $25,000. According to the Wall Street Journal:
So, fed up with various weight-loss programs, Mr. Russo decided to take his problem to the public. In addition to going on a 1,000-calorie-a-day diet, he is offering a bounty — $25,000 to the charity of one's choosing — to anyone who spots him eating in a restaurant. He has peppered local eateries ... with "wanted" pictures of himself.
But this contract has a fatal flaw: there is no mechanism to prevent renegotiation. With visions of éclairs dancing in his head, Mr. Russo should argue that under the present contractual agreement, no one will ever get the $25,000 penalty since he will never violate the contract. Hence, the contract is worthless. Renegotiation would be in their mutual interest. For example, Mr. Russo might offer to buy a round of drinks in exchange for being released from the contract. The restaurant diners prefer a drink to nothing and let him out of the contract.
For the contracting approach to be successful, the party that enforces the action or collects the penalty must have some independent incentive to do so. In the dieting problem, Mr. Russo's family might also want him to be skinnier and thus not be tempted by a mere free drink.
The contracting approach is better suited to business dealings. A broken contract typically produces damages, so that the injured party is not willing to give up on the contract for naught. For example, a producer might demand a penalty from a supplier who fails to deliver. The producer is not indifferent about whether the supplier delivers or not. He would rather get his supply than receive the penalty sum. Renegotiating the contract is no longer a mutually attractive option.
Cutting off Communication
Cutting off communication succeeds as a credible commitment device because it can make an action truly irreversible. An extreme form of this tactic arises in the terms of a last will and testament. Once the party has died, renegotiation is virtually impossible. (For example, it took an act of the British parliament to change Cecil Rhodes's will in order to allow female Rhodes Scholars.) In general, where there is a will, there is a way to make your strategy credible.
For example, most universities set a price for endowing a chair. The going rate is about $1.5 million. These prices are not carved in stone (nor covered with ivy). Universities have been known to bend their rules in order to accept the terms and the money of deceased donors who fail to meet the current prices.
Burning Bridges behind You
Armies often achieve commitment by denying themselves an opportunity to retreat. This strategy goes back at least to 1066, when William the Conqueror's invading army burned its own ships, thus making an unconditional commitment to fight rather than retreat. Cortés followed the same strategy in his conquest of Mexico. Upon his arrival in Cempoalla, Mexico, he gave orders that led to all but one of his ships being burnt or disabled. Although his soldiers were vastly outnumbered, they had no other choice but to fight and win. "Had [Cortés] failed, it might well seem an act of madness.... Yet it was the fruit of deliberate calculation.... There was no alternative in his mind but to succeed or perish."
Leaving the Outcome beyond Your Control
The doomsday device in the movie Dr. Strangelove consisted of large buried nuclear bombs whose explosion would emit enough radioactivity to exterminate all life on earth. The device would be detonated automatically in the event of an attack on the Soviet Union. When President Merkin Muffley of the United States asked if such an automatic trigger was possible, Dr. Strangelove answered: "It is not merely possible; it is essential."
The device is such a good deterrent because it makes aggression tantamount to suicide. Faced with an American attack, Soviet premier Dimitri Kissov might refrain from retaliating rather than risk mutually assured destruction. As long as the Soviet premier has the freedom not to respond, the Americans might risk an attack. But with the doomsday device in place, the Soviet response is automatic and the deterrent threat is credible.
However, this strategic advantage does not come without a cost. There might be a small accident or an unauthorized attack, after which the Soviets would not want to carry out their dire threat, but they would have no choice, since execution is out of their control. This is exactly what happened in Dr. Strangelove.
To reduce the consequences of errors, you want a threat that is no stronger than is necessary to deter the rival. What do you do if the action is indivisible, as a nuclear explosion surely is? You can make the threat milder by creating a risk, but not a certainty, that the dreadful event will occur. This is Thomas Schelling's idea of brinkmanship.
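Schelling's point can be restated as a small expected-value calculation. The sketch below (the payoff numbers are my own hypothetical choices, not the author's) finds the smallest risk that still deters:

```python
# Sketch of brinkmanship as expected value: instead of a certain
# doomsday response, commit to one that fires with probability p.
# The rival is deterred once p * (rival's loss) exceeds the rival's
# gain from aggression, so the smallest sufficient p also minimizes
# the expected cost of accidents. Numbers below are hypothetical.

def minimal_deterrent_risk(rival_gain, rival_loss):
    """Probability p at which the rival's expected loss equals the
    gain from aggression; any p above this level deters."""
    if rival_gain >= rival_loss:
        return None  # even a certain threat would not deter
    return rival_gain / rival_loss

# Aggression gains the rival 10; the dreadful event costs them 1,000.
# A mere 1% risk of the event is already enough to deter.
print(minimal_deterrent_risk(10, 1000))  # 0.01
```

The divisibility that the physical action lacks is recovered in the probability of carrying it out.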
Moving in Steps
Although two parties may not trust each other when the stakes are large, if the problem of commitment can be reduced to a small-enough scale, then the issue of credibility will resolve itself. The threat or promise is broken up into many pieces, and each one is solved separately.
Honor among thieves is restored if they have to trust each other only a little bit at a time. Consider the difference between making a single $1 million payment to another person for a kilogram of cocaine and engaging in 1,000 sequential transactions with this other party, with each transaction limited to $1,000 worth of cocaine. While it might be worthwhile to double-cross your "partner" for $1 million, the gain of $1,000 is too small, since it brings a premature end to a profitable ongoing relationship.
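The arithmetic of the cocaine example can be checked directly. A rough sketch, assuming a purely illustrative $100 profit per step (a figure of my own, not from the text):

```python
# Why 1,000 small transactions keep both sides honest: cheating pockets
# one step's payment but forfeits the profit on every remaining step.
# The per-step profit figure is an assumption for illustration.

def cheating_pays(grab, profit_per_step, steps_remaining):
    """True if pocketing 'grab' beats honest profit on the rest."""
    return grab > profit_per_step * steps_remaining

# One $1 million transaction: nothing left to forfeit afterwards.
print(cheating_pays(1_000_000, 100, 0))   # True

# Step 1 of 1,000 transactions of $1,000 each: cheating gains $1,000
# but forfeits $100 of profit on each of the 999 remaining steps.
print(cheating_pays(1_000, 100, 999))     # False
```

At every step of the small-transaction scheme the future profit stream dwarfs the one-shot gain, so the temptation never bites.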
Whenever a large degree of commitment is infeasible, one should make do with a small amount and reuse it frequently.
Teamwork
Often others can help us achieve credible commitment. Although people may be weak on their own, they can build resolve by forming a group. The successful use of peer pressure to achieve commitment has been made famous by Alcoholics Anonymous (and diet centers too). The AA approach changes the payoffs from breaking your word. It sets up a social institution in which pride and self-respect are lost when commitments are broken.
Mandated Negotiating Agents
If a worker says he cannot accept any wage increase less than 5 percent, why should the employer believe that he will not subsequently back down and accept 4 percent? Money on the table induces people to try negotiating one more time.
The worker's situation can be improved if he has someone else negotiate for him. When the union leader is the negotiator, his position may be less flexible. He may be forced to keep his promise or lose support from his electorate. The union leader may secure a restrictive mandate from his members, or put his prestige on the line by declaring his inflexible position in public. In effect, the labor leader becomes a mandated negotiating agent. His authority to act as a negotiator is based on his position. In some cases he simply does not have the authority to compromise; the workers, not the leader, must ratify the contract. In other cases, compromise by the leader would result in his removal.
In practice we are concerned with the means as well as the ends of achieving commitment. If the labor leader voluntarily commits his prestige to a certain position, should you (do you) treat his loss of face as you would if it were externally imposed? Someone who tries to stop a train by tying himself to the railroad tracks may get less sympathy than someone else who has been tied there against his will.
