# Strategy: An Introduction to Game Theory

*Strategy: An Introduction to Game Theory*, third edition, by Joel Watson (University of California, San Diego). The notes below draw on the Instructors' Manual for the text, by Joel Watson with Jesse Bull. Joel Watson is Professor of Economics at the University of California, San Diego. He received his BA from UCSD and his PhD from Stanford.


Sound managerial decision making often requires "putting yourself behind …" (from *An Introduction to Game Theory and Business Strategy*). The perfect balance of readability and formalism: Joel Watson has refined his successful text to make it even more student-friendly.

Behavior at joint decision nodes is characterized by the standard bargaining solution. Thus, a game with joint decision nodes is a hybrid representation, with cooperative and noncooperative components. The concept of a negotiation equilibrium combines sequential rationality at individual decision nodes with the standard bargaining solution at joint decision nodes. The chapter illustrates the ideas with an example of an incentive contract.

Lecture Notes: here is an outline for a lecture. In many strategic settings, negotiation is just one of the key components. Recall the pictures and notation from the earlier chapter (Tree Rule 6). Then, using the standard bargaining solution, determine the optimal contract and how the surplus will be divided. Agency incentive contracting: you can run a classroom experiment in which three students interact as follows. Students 1 and 2 have to play a matrix game. The contract between students 1 and 3 (which you enforce) can specify transfers between them as a function of the matrix game outcome.

You can arrange the experiment so that the identities of students 1 and 3 are not known to student 2 by, say, allowing many pairs of students to write contracts, then selecting a pair randomly and anonymously, and paying them privately. After the experiment, discuss why you might expect t, rather than s, to be played in the matrix game. Ocean liner shipping-contract example: a producer who wishes to ship a moderate quantity of output (say, three or four full containers) overseas has a choice of three ways of shipping the product.

He can contract directly with the shipper, he can contract with an independent shipping contractor (who has a contract with a shipper), or he can use a trade association that has a contract with a shipper. Shipping the product is worth a given amount to the producer. Suppose that the producer only has time to negotiate with one of the parties because his product is perishable, but in the event of no agreement he can use the trade association. An example is developed in which one of the players must choose whether to invest prior to production taking place.

In the second version, parties can contract up front; here, option contracts are shown to provide optimal incentives. The chapter also comments on how asset ownership can help alleviate the hold-up problem. Related to externality. Then determine the rational investment choice. Describe how option contracts work and are enforced.

Calculate and describe the negotiation equilibrium. This may not be true in general. If the outside asset value rises too quickly with the investment, then the investor may have the incentive to overinvest. You can also present the analysis of, or run an experiment based on, a game like that of the Guided Exercise in this chapter.

A Nash-punishment folk theorem is stated at the end of the chapter. Lecture Notes: a lecture may be organized according to the following outline. Thus, the equilibria of the subgames are the same as those of the stage game. Grim trigger. The folk theorem. Two-period example.
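Before turning to classroom experiments, the grim-trigger condition can be shown as a short in-class computation. This is a minimal sketch assuming generic prisoner's-dilemma stage payoffs R (mutual cooperation), P (mutual defection), and T (temptation), with T > R > P; the numbers below are illustrative, not the textbook's.

```python
def grim_trigger_threshold(R, P, T):
    """Smallest discount factor at which grim trigger sustains cooperation.

    Cooperating forever yields R/(1-d); a one-shot deviation yields T today
    and P forever after, i.e. T + d*P/(1-d). Cooperation is sustainable iff
    R/(1-d) >= T + d*P/(1-d), which rearranges to d >= (T - R)/(T - P).
    """
    return (T - R) / (T - P)


def cooperation_sustainable(R, P, T, d):
    # Compare the two discounted payoff streams directly.
    return R / (1 - d) >= T + d * P / (1 - d)


# Illustrative payoffs R=2, P=1, T=3 give a threshold of 1/2.
print(grim_trigger_threshold(2, 1, 3))        # 0.5
print(cooperation_sustainable(2, 1, 3, 0.6))  # True
print(cooperation_sustainable(2, 1, 3, 0.4))  # False
```

Varying the payoffs in class shows how a larger temptation T raises the required patience.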

You can also run a classroom experiment based on such a game. Have the students communicate in advance either in pairs or as a group to agree on how they will play the game. That is, have the students make a self-enforced contract. This will hopefully get them thinking about history-dependent strategies. Plus, it will reinforce the interpretation of equilibrium as a self-enforced contract, which you may want to discuss near the end of a lecture on reputation and repeated games.

The Princess Bride reputation example: at the beginning of your lecture on reputation, you can play the scene from The Princess Bride in which Wesley is reunited with the princess. Just before he reveals his identity to her, he makes interesting comments about how a pirate maintains his reputation. This example reinforces the basic analytical exercise from the chapter. The section on international trade is a short verbal discussion of how reputation functions as the mechanism for self-enforcement of a long-term contract.

Other applications can also be presented, in addition to these or substituting for these. For each application, it may be helpful to organize the lecture as follows. The Princess Bride second reputation example: while in the swamp, Wesley explains how a reputation can be associated with a name, even if the name changes hands over time. Repeated Cournot oligopoly experiment: let three students interact in a repeated Cournot oligopoly. This may be set as an oil or some other commodity production game.

It may be useful to have the game end probabilistically. This may be easy to do if the game is played by e-mail, but it may require a set time frame if done in class. The interaction can be done in two scenarios. Another abstract example follows. Private information about preferences: for example, the buyer knows his own valuation of the good, which the seller does not observe.

Nature moves at chance nodes, which are represented as open circles. In the incomplete-information version, Nature picks with equal probabilities the door behind which the prize is concealed, and Monty randomizes equally between alternatives when he has to open one of the doors. This game also makes a good example (see Exercise 4 in Chapter 24 of the textbook). For an experiment, describe the good as a soon-expiring check made out to player 2. You show player 2 the amount of the check, but you seal the check in an envelope before giving it to player 1, who bargains over the terms of trading it to player 2.
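The Monty Hall game above also lends itself to a quick simulation you can show in class. This is a minimal sketch assuming the standard rules: Nature hides the prize uniformly, Monty always opens an unchosen, prize-free door (randomizing when he has a choice), and the contestant then either stays or switches.

```python
import random


def monty_hall(trials=100_000, switch=True, seed=0):
    """Estimate the contestant's win probability by simulation."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        prize = rng.randrange(3)
        pick = rng.randrange(3)
        # Monty opens a door that is neither the pick nor the prize,
        # chosen at random when he has two options.
        opened = rng.choice([d for d in range(3) if d != pick and d != prize])
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == prize)
    return wins / trials


print(monty_hall(switch=True))   # near 2/3
print(monty_hall(switch=False))  # near 1/3
```

Students can be asked to predict the two numbers before you run it.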

Signaling games: it may be worthwhile to describe a signaling game that you plan to analyze later in class. The Price is Right: the bidding game from this popular television game show forms the basis for a good bonus question. See also Exercise 5 in Chapter 25 for a simpler, but still challenging, version. In the game, four contestants must guess the price of an item. Suppose none of them knows the price of the item initially, but they all know that the price is an integer drawn from a known range; in fact, when they have to make their guesses, the contestants all believe that the price is equally likely to be any number in that range. The players make their guesses sequentially.

Player 3 next chooses a number, followed by player 4. After the players make their guesses, the actual price is revealed. The other players get 0. The bonus question is to solve for equilibrium play in this game.

There is a move of Nature (a random productive outcome). Because Nature moves last, the game has complete information.
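Player 4's last-mover advantage in The Price is Right game can be illustrated numerically. This sketch assumes the usual show scoring rule (the winner is the highest guess not exceeding the price, and nobody wins if all guesses overshoot), a hypothetical price range of 1 to 1,000, and illustrative earlier guesses; none of these numbers are taken from the exercise.

```python
def win_counts(guesses, price_max=1000):
    """Count, for each guess, the prices in 1..price_max at which it wins.

    Assumed rule: the winner is the highest guess that does not exceed
    the price; if every guess overshoots, nobody wins. Guesses distinct.
    """
    counts = [0] * len(guesses)
    for price in range(1, price_max + 1):
        eligible = [g for g in guesses if g <= price]
        if eligible:
            counts[guesses.index(max(eligible))] += 1
    return counts


# Player 4 moves last; compare a few candidate replies to earlier guesses.
earlier = [200, 500, 800]
for bid in (1, 201, 501, 801):
    c = win_counts(earlier + [bid])
    print(bid, c[3] / 1000)  # player 4's winning probability
```

Bidding just above an earlier guess captures that guess's whole interval, which is the intuition behind the bonus question.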

Thus, it can be analyzed using subgame perfect equilibrium. An example helps explain the notions of risk aversion and risk premia. Then a streamlined principal-agent model is developed and fully analyzed. Lecture Notes: analysis of the principal-agent problem is fairly complicated. Instructors will likely not want to develop in class a more general and complicated model than the one in the textbook.

Concavity, linearity, etc. Risk-neutral principal. Discuss risk aversion and risk premia. Two methods are presented. The two methods are equivalent whenever all types are realized with positive probability (an innocuous assumption for static settings).
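The risk-premium idea can be made concrete with a one-line computation. This sketch assumes square-root utility and an illustrative 50-50 lottery over 0 and 100; neither is taken from the textbook.

```python
import math


def risk_premium_sqrt(outcomes, probs):
    """Risk premium for u(x) = sqrt(x): expected value minus certainty equivalent."""
    eu = sum(p * math.sqrt(x) for p, x in zip(probs, outcomes))
    ev = sum(p * x for p, x in zip(probs, outcomes))
    ce = eu ** 2  # u(ce) = eu, so ce = eu**2 for square-root utility
    return ev - ce


# A 50-50 lottery over 0 and 100: EV = 50, CE = 25, risk premium = 25.
print(risk_premium_sqrt([0, 100], [0.5, 0.5]))  # 25.0
```

The gap between the expected value and the certainty equivalent is exactly what a risk-neutral principal can exploit when insuring a risk-averse agent.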

The second method is shown to be useful when there are continuous strategy spaces, as illustrated using the Cournot duopoly with incomplete information. Examples and Experiments: you can run a common- or private-value auction experiment or a lemons experiment in class as a transition to the material in the next chapter. You might also consider simple examples to illustrate the method of calculating best responses for individual player-types. These settings are studied using static models, in the Bayesian normal form, and the games are analyzed using the techniques discussed in the preceding chapter.
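Best responses by player-type can also be computed iteratively in front of the class. A minimal sketch for a Cournot duopoly with incomplete information, assuming linear demand P = a - q1 - q2 and illustrative costs (firm 2's cost is privately known to be low or high with equal probability); none of the numbers are the textbook's.

```python
def bayesian_cournot(a=10.0, c1=1.0, c2_types=(0.0, 2.0),
                     probs=(0.5, 0.5), iters=200):
    """Iterate best responses to a Bayesian Nash equilibrium.

    Each type t of firm 2 best-responds: q2_t = (a - q1 - c_t) / 2.
    Firm 1 best-responds to expected output: q1 = (a - E[q2] - c1) / 2.
    """
    q1, q2 = 1.0, [1.0 for _ in c2_types]
    for _ in range(iters):
        e_q2 = sum(p * q for p, q in zip(probs, q2))
        q1 = max(0.0, (a - e_q2 - c1) / 2)
        q2 = [max(0.0, (a - q1 - c) / 2) for c in c2_types]
    return q1, q2


q1, (q2_low, q2_high) = bayesian_cournot()
print(round(q1, 4), round(q2_low, 4), round(q2_high, 4))  # 3.0 3.5 2.5
```

The fixed point matches the closed form q1 = (a - 2c1 + E[c2]) / 3, which students can verify by hand.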

The lemons model is quite simple; a lemons model that is more general than the one in the textbook can easily be covered in class.

The auction analysis, on the other hand, is more complicated. The major sticking points are (a) explaining the method of assuming a parameterized form of the equilibrium strategies and then calculating best responses to verify the form and determine the parameter, (b) the calculus required to calculate best responses, and (c) the double integration needed to establish revenue equivalence. One can skip (c) with no problem.

Note whether the equilibrium is unique. Lemons experiment: let one student be the seller of a car and another be the potential buyer.

Prepare some cards with values written on them. Tell them that whoever has the card in the end will get paid.

If student 1 has the card, then she gets the amount written on it. Stock trade and auction experiments: you can run an experiment in which randomly selected students play a trading game like that of Exercise 8 in this chapter. Have the students specify on paper the set of prices at which they are willing to trade. You can also organize the interaction as a common-value auction, or run any other type of auction in class. The gift game is utilized throughout the chapter to illustrate the key ideas.

First, the example is used to demonstrate that subgame perfection does not adequately represent sequential rationality. Then comes the notion of conditional belief, which is presented as the belief of a player at an information set where he has observed the action, but not the type, of another player.

A simple signaling game will do. Initial belief about types; updated posterior belief. Note that conditional beliefs are unconstrained at zero-probability information sets. Conditional probability demonstration.

This could be done several times, with the color revealed following each guess. Then a male and a female student could be selected, and a student could be asked to guess who has, for example, the red card. Signaling game experiment: it may be instructive to play in class a signaling game in which one of the player-types has a dominated strategy.
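The updating behind the conditional-probability demonstration is just Bayes' rule, which you can also sketch in a few lines; the prior and likelihood numbers below are hypothetical.

```python
def posterior(prior, likelihoods):
    """Bayes' rule: P(type_i | signal) is proportional to prior_i * P(signal | type_i)."""
    joint = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]


# Two equally likely types; the observed signal is more likely under type 0.
print(posterior([0.5, 0.5], [0.8, 0.3]))  # about [0.727, 0.273]
```

The same function illustrates why beliefs are pinned down only at information sets reached with positive probability: at a zero-probability set, `total` is zero and the posterior is unconstrained.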

The variant of the gift game discussed at the beginning of Chapter 28 is such a game. The Princess Bride signaling example. A scene near the end of The Princess Bride movie is a good example of a signaling game. The scene begins with Wesley lying in a bed. The prince enters the room. The prince does not know whether Wesley is strong or weak. Wesley can choose whether or not to stand. This game can be diagrammed and discussed in class. Exercise 6 in this chapter sketches one model of this strategic setting.

The reputation model illustrates how incomplete information causes a player of one type to pretend to be another type, which has interesting implications. The extensive-form tree of the job-market signaling model is in the standard signaling-game format, so this model can be easily presented in class.


Examples and Experiments: in addition to, or in place of, the applications presented in this chapter, you might lecture on the problem of contracting with adverse selection. Exercise 9 of Chapter 29 would be suitable as the basis for such a lecture. Your students can consult this appendix to brush up on the mathematics skills that are required for game-theoretic analysis. As noted at the beginning of this manual, calculus is used sparingly in the textbook and it can be avoided.

Appendix B gives some of the details of the rationalizability construction. Three challenging mathematical exercises appear at the end of Appendix B. Although we worked diligently on these solutions, there are bound to be a few typos here and there. Please report any instances where you think you have found a substantial error.

The order does not matter, as it is a simultaneous-move game. Exercise 1. Exercise 4: a strategy for the manager must specify an action to be taken in every contingency. Player 2 has four strategies. Some possible extensive forms are shown below and on the next page.

Matching Pennies; Battle of the Sexes; Pareto Coordination: X dominates Z, so we can iteratively delete dominated strategies. U dominates D. When D is ruled out, R dominates C. The order does not matter because if a strategy is dominated (not a best response) relative to some set of strategies of the other player, then this strategy will also be dominated relative to a smaller set of strategies for the other player.
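Iterated deletion of strictly dominated pure strategies is mechanical enough to code. The payoff matrices below are hypothetical, chosen so that U dominates D and, once D is gone, R dominates the remaining columns, mirroring the order of deletion described above.

```python
def iesds(u1, u2):
    """Iteratively delete strictly dominated pure strategies.

    u1[r][c] and u2[r][c] are the row and column players' payoffs.
    Returns the surviving row and column indices.
    """
    rows, cols = list(range(len(u1))), list(range(len(u1[0])))
    changed = True
    while changed:
        changed = False
        for r in rows[:]:
            if any(all(u1[r2][c] > u1[r][c] for c in cols)
                   for r2 in rows if r2 != r):
                rows.remove(r)
                changed = True
        for c in cols[:]:
            if any(all(u2[r][c2] > u2[r][c] for r in rows)
                   for c2 in cols if c2 != c):
                cols.remove(c)
                changed = True
    return rows, cols


# Rows U, D; columns L, C, R (hypothetical payoffs).  U strictly dominates D;
# with D gone, R strictly dominates both C and L, leaving the profile (U, R).
u1 = [[3, 3, 3],
      [1, 1, 1]]
u2 = [[1, 0, 2],
      [0, 3, 0]]
print(iesds(u1, u2))  # ([0], [2])
```

Running the loop with the row and column passes swapped gives the same answer, illustrating the order-independence point in the text.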

If s1 is rationalizable, then s2 is a best response to a strategy of player 1 that may rationally be played. Thus, player 2 can rationalize strategy s2.

So player 10 has a single undominated strategy, 0. Given this, we know a will be at most 9 if everyone except player 10 selects 9. We label the regions as shown below. Noticing the symmetry makes this easier. It is easy to see that if the regions are divided in half between 5 and 6, the vote is split evenly between the two halves.

In any of these outcomes, each candidate receives the same number of votes. This yields the following graph of best-response functions, which are represented below.

Figure 7. The Nash equilibrium is (B, Z). The Nash equilibrium is (M, R).

The Nash equilibria are (stag, stag) and (hare, hare). Chapter 4, Exercise 2; Chapter 5, Exercise 1: the Nash equilibrium is (D, R). Exercise 3: no Nash equilibrium.
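Pure-strategy Nash equilibria of any matrix game can be found by checking mutual best responses, which makes a good quick demonstration. A sketch applied to a stag hunt with illustrative payoffs (not necessarily the textbook's numbers):

```python
def pure_nash(u1, u2):
    """Return all (row, col) profiles where each strategy is a best response."""
    eqs = []
    n_rows, n_cols = len(u1), len(u1[0])
    for r in range(n_rows):
        for c in range(n_cols):
            row_best = u1[r][c] == max(u1[rr][c] for rr in range(n_rows))
            col_best = u2[r][c] == max(u2[r][cc] for cc in range(n_cols))
            if row_best and col_best:
                eqs.append((r, c))
    return eqs


# Stag hunt: strategy 0 = stag, 1 = hare (illustrative payoffs).
u1 = [[4, 0],
      [3, 3]]
u2 = [[4, 3],
      [0, 3]]
print(pure_nash(u1, u2))  # [(0, 0), (1, 1)]: (stag, stag) and (hare, hare)
```

Feeding in a matching-pennies matrix returns an empty list, matching the "no Nash equilibrium" answers above.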

That (B, X) is a Nash equilibrium has no implications for x. Note that 3, 4, 5, 6, and 7 are all best responses to beliefs that put sufficient probability on the appropriate strategies. Thus, in round two, play will be (movie, opera). Then in round three, play will be (opera, movie). This cycle will continue with no equilibrium being reached. The non-strict Nash equilibrium will not be played all of the time.

It must be that one or both players will play a strategy other than his part of such a Nash equilibrium with positive probability. Thus, in the long run si will not be played. It is an equilibrium if 6 players select Z, 3 select X, and 2 select Y. The number of equilibria follows by counting such profiles; this is represented in the graph below. However, there is no such number. These are represented at the top of the next page. For L, voting for McClintock is dominated by voting for Bustamante.

So neither L nor M will vote for McClintock. We can then show that L does strictly better by voting for Bustamante than voting for Schwarzenegger for any strategies of the others assuming M does not vote for McClintock.


Knowing this, M will vote for Schwarzenegger. Thus, C will vote for Schwarzenegger. If player x selects G she gets 1. If she selects F, she gets 2m. If player x believes that no one else will play F, then her best response is G.

If she believes that everyone else will play F, then her best response is F. There is a symmetric Nash equilibrium in which everyone plays F. There is another symmetric Nash equilibrium in which everyone plays G. This means that in the next round, m has decreased further. After two rounds, we get that G is the only rationalizable strategy for everyone.

There is enough information. Thus, player 1 will never play D, and player 2 will never play L. Let q denote the probability with which M is played.


Here p denotes the probability with which U is played and r denotes the probability with which C is played. First game: the normal form is represented below. Let q be the probability that player 2 selects X. There are, however, mixed-strategy equilibria in which player 1 selects C with probability 1 (that is, plays a pure strategy) and player 2 mixes between X and Y.
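The indifference conditions behind probabilities like p and q can be solved mechanically for any 2x2 game with a fully mixed equilibrium. The battle-of-the-sexes payoffs below are the common illustrative ones, not necessarily the textbook's numbers.

```python
def mixed_2x2(u1, u2):
    """Fully mixed equilibrium of a 2x2 game via indifference conditions.

    Player 2 plays column 0 with probability q chosen to make player 1
    indifferent between rows; p is defined symmetrically for player 1.
    Assumes the denominators are nonzero (a fully mixed equilibrium exists).
    """
    q = (u1[1][1] - u1[0][1]) / (u1[0][0] - u1[0][1] - u1[1][0] + u1[1][1])
    p = (u2[1][1] - u2[1][0]) / (u2[0][0] - u2[1][0] - u2[0][1] + u2[1][1])
    return p, q


# Battle of the sexes: each player prefers coordinating on her favorite event.
u1 = [[2, 0],
      [0, 1]]
u2 = [[1, 0],
      [0, 2]]
print(mixed_2x2(u1, u2))  # p = 2/3, q = 1/3
```

Plugging the answer back in and checking that both rows give player 1 the same expected payoff is a useful classroom verification step.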

Second game: the normal form of this game is represented below. Clearly, there is no equilibrium in which player 1 selects ID with positive probability. There is also no equilibrium in which player 1 selects IU with positive probability, for, if this were the case, then player 2 strictly prefers O and, in response, player 1 should not pick IU. Note that this probability decreases as the number of bystanders n goes up. It is easy to see that if there is no pure-strategy Nash equilibrium, then only one of each of these pairs of conditions can hold.
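The bystander comparative static can be verified with the standard volunteer's-dilemma specification, which is assumed here with illustrative v and c (not the textbook's numbers): helping costs c and is worth v to everyone if anyone helps, and in the symmetric mixed equilibrium each bystander is indifferent between helping and not.

```python
def volunteer_mix(v, c, n):
    """Symmetric mixed equilibrium with n bystanders (assumes 0 < c < v, n >= 2).

    Indifference: v - c = v * (1 - (1 - p)**(n - 1)), so
    p = 1 - (c / v)**(1 / (n - 1)).
    Returns (p, probability that nobody helps).
    """
    p = 1 - (c / v) ** (1 / (n - 1))
    return p, (1 - p) ** n


# Each individual helps less often as n grows, and the chance that
# nobody helps at all goes up.
for n in (2, 3, 5, 10):
    p, nobody = volunteer_mix(v=1.0, c=0.5, n=n)
    print(n, round(p, 3), round(nobody, 3))
```

Printing the two columns side by side makes the perverse group-size effect immediate for students.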

This implies that each pure strategy of each player is a best response to some other pure strategy of the other. Consider player 1. The analogous argument can be made with respect to player 2. No, it does not have any pure-strategy equilibria. Route b is dominated by a mixture of routes a and c. Further, the other player can mix so that c is a best response. Since b is dominated, we now consider a mixture over a and d. Thus, we should expect that, so long as the ratio of a to d is kept the same, c could also be played with positive probability.

Let p denote the probability with which he plays a, and let q denote the probability with which he plays c. Examples include chess, checkers, tic-tac-toe, and Othello. Let i be one of the players and let j be the other player. For the same reason, si is a best response to tj. Consider (N, I). There are pure-strategy equilibria that achieve this outcome, but it never happens in the symmetric mixed-strategy equilibrium.

It must be possible to convey information to the court in order to have a transfer imposed. Restitution damages take from the breacher the amount of his gain from breaching. The externally enforced component is a transfer of at least 1 from player 2 to player 1 when (I, N) occurs, a transfer of at least 2 from player 1 to player 2 when (N, I) occurs, and none otherwise.

For technology B, the self-enforced component is to play (I, I).


The externally enforced component is a transfer of at least 4 from player 1 to player 2 when (N, I) occurs, and none otherwise. There is no externally enforced component. For B, the self-enforced component is to transfer 4 from player 1 to player 2 when someone plays N, and no transfer when both play I. Reliance damages seek to put the non-breaching party back to where he would have been had he not relied on the contract.

Restitution damages take the gain that the breaching party receives due to breaching. Examples include the employment contracts of salespeople, attorneys, and professors. No general rule.

Clearly, the extensive form of this game will contain dashed lines. Consider Exercise 3(a) of Chapter 4.


The normal form of this does not exhibit imperfect information. Suppose not. Thus, in round 4 player 2 will choose S. Each player selects A or B, picks a positive number when (A, B) is chosen, and picks a positive number when (B, A) is chosen.

Further, B will never be selected in equilibrium. The Nash equilibria of this game are given by (Ax1, Ax2), where x1 and x2 are any positive numbers.

Because this is a simultaneous-move game, we are just looking for the Nash equilibrium of the following normal form. The equilibrium is (L, L). Thus, in the subgame perfect equilibrium both players invest 50, in the low-production plant. The optimal pricing scheme is as follows. Hal buys in period 1 and Laurie buys in period 2.

The subgame perfect equilibrium is for player 1 to locate in region 5 and for player 2 to use the strategy in which, for example, player 2 locates in region 2 when player 1 has located in region 1. An example of this is below.

To win the game, a player must not be forced to enter the top-left cell Z; thus, a player would lose if he must move with the rock in either cell 1 or cell 2 as shown in the following diagram. A player who is able to move the rock into cell 1 or cell 2 thus wins the game. This implies that a player can guarantee victory if he is on the move when the rock is in one of cells 3, 4, 5, 6, or 7, as shown in the diagram below. We next see that a player who must move from cell 8, cell 9 or cell 10 shown below will lose.

Otherwise, player 1 has a winning strategy. This can be solved by backward induction. Let (x, y) denote the state where the red basket contains x balls and the blue basket contains y balls. To win this game, a player must leave her opponent with either (0, 1) or (1, 0). So, to win, a player should leave her opponent with (2, 2). As player 1 begins in cell Y, he must enter a cell marked with an X. Thus, player 2 has a strategy that ensures a win.
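The backward-induction classification of winning and losing positions in the basket game can be automated. This sketch assumes rules consistent with the discussion above, though the exercise's exact rules are not restated here: on each turn a player removes any positive number of balls from a single basket, and the player who removes the last ball loses.

```python
from functools import lru_cache


@lru_cache(maxsize=None)
def mover_wins(x, y):
    """True if the player to move at state (x, y) has a winning strategy.

    Assumed rules: remove any positive number of balls from one basket;
    whoever removes the last ball loses.  At (0, 0) the previous mover
    just took the last ball, so the player to move has already won.
    """
    if x == 0 and y == 0:
        return True
    # All states reachable in one move: shrink one basket, leave the other.
    children = [(k, y) for k in range(x)] + [(x, k) for k in range(y)]
    return any(not mover_wins(*child) for child in children)


# Matches the analysis above: leave your opponent (0, 1), (1, 0), or (2, 2).
print(mover_wins(0, 1), mover_wins(2, 2), mover_wins(1, 1))  # False False True
```

The memoized recursion is exactly backward induction: a state is winning if and only if some move leads to a state that is losing for the opponent.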

There is a subgame perfect equilibrium in which player 1 wins, another in which player 2 wins, and still another in which player 3 wins. Player 1 has a strategy that guarantees victory. This is easily proved using a contradiction argument. Suppose player 1 does not have a strategy guaranteeing victory. Then player 2 must have such a strategy.

This means that, for every opening move by player 1, player 2 can guarantee victory from this point. This means that player 1 has a strategy that guarantees him a win, which contradicts what we assumed at the beginning. Thus, player 1 actually does have a strategy that guarantees him victory, regardless of what player 2 does.

A winning strategy is known. Player 2 will then rationally choose cell (1, 2) and force player 3 to move into cell (1, 1). Possible examples would include salary negotiations, merger negotiations, and negotiating the purchase of an automobile. More precisely, at the least, you can wait until the last period, at which point you can get the entire surplus (the owner will accept anything then).

It is not possible to do both if c is close to 0. In other words, the legal fee deters frivolous suits from player 1, while not getting in the way of justice in the event that player 2 deviates. This prevents the support of (H, H). The game is represented below. Since there are many combinations of q1 and q2 that satisfy this equation, there are multiple equilibria.

The gain from having the maximum surplus outweighs the additional cost. Note that the total quantity 40 is less than both the standard Cournot output and the monopoly output. The seller will not choose H. A number of sections have been added, and numerous chapters have been substantially revised. Dozens of new exercises have been added, along with solutions to selected exercises.


Chapters are short and focused, with just the right amount of mathematical content and end-of-chapter exercises. New passages walk students through tricky topics.

It includes supplementary notes on rationalizability, partnership games, and forward induction. It combines a personal memoir with an introduction to some central concepts of modern economic thought.

Rubinstein describes mathematical models as fables, existing between fantasy and reality, illuminating but not accurately portraying the real world. It has ten sections and includes a glossary. The final version was published in volume 2 of the Encyclopedia of Information Systems. As well as a lay introduction to each prize winner's research, there are "Advanced information" links giving a more technical explanation.

This link is to the Economics Network's quick index of lecture videos and related materials on the site. Each video is a full lecture (usually between 40 and 60 minutes) with good audio and video quality, pitched at a non-technical audience. Transcripts of each lecture are available. Licence: All Rights Reserved. Jim Ratliff's graduate-level course in game theory (Jim Ratliff, University of Arizona): these are the extensive materials used in a course taught to students in their second year of the economics PhD program at the University of Arizona.


It includes a simple interactive game in Javascript. Links at the top of the page take you to an interactive quiz on the topic and a game solver. It has been produced by Stefan Waner and Steven R. Costenoble of Hofstra University. The course contains many case studies. Neither the high nor the low type has the incentive to deviate.

The Strategy of Conflict. Exercise 1: education would not be a useful signal in this setting. For the rules of Chomp, see Exercise 5 in the relevant chapter.

If the seller performed, the buyer has the evidence with some probability. Another tournament or challenge could also be used.
