Lecture 2 Strategic-form games
Last time: defined knowledge, common knowledge, meet (of partitions), and reachability.

Reminders:
• E is common knowledge at ω if ω ∈ K_I^∞(E).
• Reachability Lemma: ω' ∈ M(ω) if there is a chain of states ω = ω_0, ω_1, ..., ω_m = ω' such that for each k there is a player i(k) with h_{i(k)}(ω_k) = h_{i(k)}(ω_{k+1}).
• Theorem: Event E is common knowledge at ω iff M(ω) ⊆ E.

How does the set of NE change with the information structure? Suppose there is a finite number of payoff matrices u^1, ..., u^L for finite strategy sets S_1, ..., S_I, a state space Ω, a common prior p, partitions H_i, and a map λ so that the payoff functions in state ω are u^{λ(ω)}(·); the strategy spaces are maps from H_i into S_i. When the state space is finite, this is a finite game, and we know that the NE correspondence is u.h.c. and generically l.h.c. in p. In particular, it is l.h.c. at strict NE.

The "coordinated attack" game. Two payoff matrices:

u^a:          A         B
    A       8, 8     -10, 1
    B      1, -10     0, 0

u^b:          A         B
    A       0, 0     -10, 1
    B      1, -10     8, 8

Ω = {0, 1, 2, ...}. In state 0 the payoff functions are given by matrix u^b; in all other states they are given by u^a.

Partitions of Ω:
H_1: (0), (1,2), (3,4), ..., (2n-1, 2n), ...
H_2: (0,1), (2,3), ..., (2n, 2n+1), ...

Prior p: p(0) = 2/3, p(k) = ε(1-ε)^{k-1}/3 for k > 0, with ε ∈ (0,1).

Interpretation: coordinated attack / email. Player 1 observes Nature's choice of payoff matrix and sends a message to player 2; sending messages isn't a strategic decision, it's hard-coded.

Suppose the state is n = 2k > 0. Then 1 knows the payoffs, and knows that 2 knows them. Moreover 2 knows that 1 knows that 2 knows, and so on, up to strings of length about k: n ∈ K_I^{k-1}({ω : ω > 0}). But there is no state at which {ω : ω > 0} is common knowledge (to see this, use reachability: every state is linked to state 0 by a chain through the players' partition cells).

When it is common knowledge that payoffs are given by u^a, (A,A) is a NE. But...

Claim: the only NE is "play B at every information set."

Proof: Player 1 plays B in state 0 (payoff matrix u^b), since there B strictly dominates A.
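As an aside before continuing the proof: the reachability characterization can be checked mechanically on a finite truncation of this information structure. A sketch (the truncation to states 0..10 and the helper name are mine): M(ω) is computed as the set of states reachable from ω through some player's partition cell, and the event "payoffs are given by u^a" is never common knowledge because every M(ω) contains state 0.

```python
def meet_cell(omega, partitions):
    """Compute M(omega), the cell of the meet of the players' partitions,
    as the set of states reachable from omega by chains through
    some player's partition cell (the Reachability Lemma)."""
    seen, frontier = {omega}, {omega}
    while frontier:
        nxt = set()
        for H in partitions:
            for w in frontier:
                cell = next(c for c in H if w in c)
                nxt |= set(cell) - seen
        seen |= nxt
        frontier = nxt
    return seen

# Truncation of the email game's information structure to states 0..10.
H1 = [{0}] + [{2*k - 1, 2*k} for k in range(1, 6)]   # (0),(1,2),...,(9,10)
H2 = [{2*k, 2*k + 1} for k in range(5)] + [{10}]     # (0,1),...,(8,9),(10)
E = set(range(1, 11))                                # "payoffs are u_a"

# E is common knowledge at omega iff M(omega) is a subset of E.
# Every state chains back to state 0, so E is never common knowledge.
print(meet_cell(6, [H1, H2]))        # the whole state space 0..10
print(meet_cell(6, [H1, H2]) <= E)   # False
```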
Let q = p(0 | {0,1}), and note that q > 1/2.

Now consider player 2 at information set (0,1). Since player 1 plays B in state 0, and the lowest payoff 2 can get from B in state 1 is 0, player 2's expected payoff from B at (0,1) is at least 8q. Playing A gives at most -10q + 8(1-q), and since q > 1/2, playing B is better.

Now look at player 1 at h_1 = (1,2). Let q' = p(1 | {1,2}), and note that q' = 1/(2-ε) > 1/2. Since 2 plays B in state 1, player 1's payoff from B is at least 0, while A gives at most -10q' + 8(1-q') < 0, so 1 plays B. Now iterate.

Conclude that the unique NE is "always B": there is no NE in which at some state the outcome is (A,A). But (A,A) is a strict NE of the payoff matrix u^a. And at large n there is mutual knowledge of the payoffs to high order (1 knows that 2 knows that 1 knows..., n/2 times). So "mutual knowledge to large n" supports a different set of NE than common knowledge.

Also, consider "expanded games" with state space Ω = {0, 1, ..., n, ..., ∞}. For each small positive ε let the distribution p^ε be as above: p^ε(0) = 2/3, p^ε(n) = ε(1-ε)^{n-1}/3 for 0 < n < ∞, and p^ε(∞) = 0.
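The iteration in this proof can be checked numerically. A sketch under my reading of the two payoff matrices (all function names are mine): at each information set, working up from state 0, we verify that the deciding player's worst-case expected payoff from B exceeds the best-case expected payoff from A, given that B is already pinned down at lower states.

```python
def all_B_is_forced(eps, n_sets=20):
    """Walk up the information sets of the email game, checking at each
    that B is strictly better than A given B is played at lower states."""
    p = lambda k: 2/3 if k == 0 else eps * (1 - eps) ** (k - 1) / 3
    # u(s)[(mine, theirs)]: the deciding player's payoff in state s
    u_bad  = {('A','A'): 0, ('A','B'): -10, ('B','A'): 1, ('B','B'): 8}   # state 0
    u_good = {('A','A'): 8, ('A','B'): -10, ('B','A'): 1, ('B','B'): 0}   # states > 0
    u = lambda s: u_bad if s == 0 else u_good
    opp_B = {0}  # B strictly dominates for player 1 in state 0
    sets = []
    for k in range(n_sets):
        sets.append((2*k, 2*k + 1))      # player 2's information set
        sets.append((2*k + 1, 2*k + 2))  # player 1's information set
    for lo, hi in sets:
        tot = p(lo) + p(hi)
        def expected(a, worst):
            val = 0.0
            for s in (lo, hi):
                if s in opp_B:                       # opponent's action is known
                    pay = u(s)[(a, 'B')]
                else:                                # bound over opponent's action
                    pays = [u(s)[(a, b)] for b in ('A', 'B')]
                    pay = min(pays) if worst else max(pays)
                val += p(s) / tot * pay
            return val
        if not expected('B', worst=True) > expected('A', worst=False):
            return False
        opp_B |= {lo, hi}   # B is now established at both states of this set
    return True

print(all_B_is_forced(0.1))   # True: B wins at every information set checked
```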
Define the distribution p* by p*(0) = 2/3 and p*(∞) = 1/3.

As ε → 0, probability mass moves to higher n, and there is a sense in which p* is the limit of the p^ε as ε → 0. But if we do say that p^ε → p*, we have a failure of lower hemicontinuity at a strict NE. So maybe we don't want to say p^ε → p*, and we don't want to use "mutual knowledge to large n" as a notion of almost common knowledge.

So the questions:
• When should we say that one information structure is close to another?
• What should we mean by "almost common knowledge"?

The last question is related because we would like to say that an information structure where a set of events E is common knowledge is close to another information structure where these events are almost common knowledge.

Monderer-Samet: Player i r-believes E at ω if p(E | h_i(ω)) ≥ r. B_i^r(E) is the set of all ω where player i r-believes E; this is also denoted B_i^{1,r}(E).

Now do an iterative definition in the style of c.k.:

B_I^{1,r}(E) = ∩_i B_i^r(E)   (everyone r-believes E)
B_i^{n,r}(E) = {ω : p(B_I^{n-1,r}(E) | h_i(ω)) ≥ r}
B_I^{n,r}(E) = ∩_i B_i^{n,r}(E)

E is common r-belief at ω if ω ∈ B_I^{∞,r}(E).

As with c.k., common r-belief can be characterized in terms of public events:
• An event is a common r-truism if everyone r-believes it when it occurs.
• An event is common r-belief at ω if it is implied by a common r-truism at ω.

Now we have one version of "almost c.k.": an event is almost c.k. if it is common r-belief for r near 1.

MS show that if two players' posteriors are common r-belief, they differ by at most 2(1-r): so Aumann's agreement result is robust to almost c.k., and holds in the limit.

MS also show that a strict NE of a game with known payoffs is still a NE when payoffs are "almost c.k.": a form of lower hemicontinuity.

More formally: as before, consider a family of games with fixed finite action spaces A_i for each player i, a set of payoff matrices u^l : A → R^I, a state space Ω that is now either finite or countably infinite, a prior p, and a map λ : Ω → {1, ..., L} such that
payoffs at ω are u(a, ω) = u^{λ(ω)}(a).

Payoffs are common r-belief at ω if the event {ω' : λ(ω') = l} is common r-belief at ω.

For each l let σ^l be a NE for the common-knowledge payoffs u^l. Define s* by s*(ω) = σ^{λ(ω)}. This assigns to each ω a NE for the corresponding payoffs. In the email game, one such s* is s*(0) = (B,B) and s*(n) = (A,A) for all n > 0. If payoffs are c.k. at each ω, then s* is a NE of the overall game G. (Discuss.)

Theorem (Monderer-Samet 1989): Suppose that for each l, σ^l is a strict equilibrium for payoffs u^l. Then for any ε > 0 there are r̄ < 1 and q̄ < 1 such that for all r ∈ [r̄, 1] and q ∈ [q̄, 1], if there is probability q that payoffs are common r-belief, then there is a NE s of G with p({ω : s(ω) = s*(ω)}) > 1 - ε.

Note that the conclusion of the theorem is false in the email game: there is no NE with an appreciable probability of playing A, even though (A,A) is a strict NE of the payoffs in every state but state 0. This is an indirect way of showing that the payoffs are never almost c.k. in the email game.

Now many payoff matrices don't have strict equilibria, and this theorem doesn't tell us anything about them. But it can be extended to show that if for each state ω, s*(ω) is a Nash (but not necessarily strict Nash) equilibrium, then for any ε > 0 there are r̄ < 1 and q̄ < 1 such that for all r ∈ [r̄, 1] and q ∈ [q̄, 1], if payoffs are common r-belief with probability q, there is an "interim ε-equilibrium" of G in which s* is played with probability 1 - ε.

Interim ε-equilibria: at each information set, the actions played are within ε of maximizing expected payoff:

E(u_i(s_i(ω), s_{-i}(ω)) | h_i(ω)) ≥ E(u_i(s_i'(ω), s_{-i}(ω)) | h_i(ω)) - ε for all s_i'.

Note that this implies the earlier result when s* specifies strict equilibria.

Outline of proof: At states where some payoff function is common r-belief, specify that players follow s*.
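As an aside, the iterative definition of common r-belief is easy to implement on a finite state space. A sketch (helper names mine), run on a finite truncation of the email game in which player 2 stops responding at the last state, so the tail probability mass sits there: the good-payoffs event is common (1-ε)-belief at the last state, but common r-belief unravels completely once r is pushed close to 1.

```python
def r_believe(E, partition, prior, r):
    """B_i^r(E): states whose partition cell assigns conditional prob >= r to E."""
    out = set()
    for cell in partition:
        pc = sum(prior[w] for w in cell)
        pE = sum(prior[w] for w in cell if w in E)
        if pc > 0 and pE / pc >= r:
            out |= set(cell)
    return out

def common_r_belief(E, partitions, prior, r, max_iter=100):
    """Iterate B^{n,r} = intersection over i of B_i^r(B^{n-1,r}) to a fixed point."""
    B = set.intersection(*[r_believe(E, H, prior, r) for H in partitions])
    for _ in range(max_iter):
        nxt = set.intersection(*[r_believe(B, H, prior, r) for H in partitions])
        if nxt == B:
            break
        B = nxt
    return B

# Truncated email game with 2n = 6: states 0..6, tail mass on state 6.
eps = 0.1
prior = {0: 2/3, **{k: eps * (1 - eps) ** (k - 1) / 3 for k in range(1, 6)},
         6: (1 - eps) ** 5 / 3}
H1 = [{0}, {1, 2}, {3, 4}, {5, 6}]
H2 = [{0, 1}, {2, 3}, {4, 5}, {6}]
E = {1, 2, 3, 4, 5, 6}   # "payoffs are u_a"

print(common_r_belief(E, [H1, H2], prior, r=0.89))  # {6}: common belief at the truism
print(common_r_belief(E, [H1, H2], prior, r=0.99))  # set(): unravels completely
```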
Call this set of states Ω*. The key is that at these states, each player i r-believes that all other players r-believe the payoffs are common r-belief, so each expects the others to play according to s*. Regardless of play in the other states, playing this way is within 4k(1-r) of a best response, where k is a constant that depends on the set of possible payoff functions. To define play at states in Ω \ Ω*, consider an artificial game where players are constrained to play s* in Ω*, and pick a NE of this game. The overall strategy profile is an interim ε-equilibrium that plays like s* with probability q.

To see the role of the infinite state space, consider the "truncated email game": player 2 does not respond after receiving n messages, so there are only the states 0, 1, ..., 2n. When 2n occurs, 2 knows it occurs. That is,

H_2 = {(0,1), ..., (2n-2, 2n-1), (2n)}
H_1 = {(0), (1,2), ..., (2n-1, 2n)}.

Here p(2n | {2n-1, 2n}) = 1-ε, so {2n} is a "(1-ε)-truism," and thus it is common (1-ε)-belief when it occurs. So there is an exact equilibrium where players play A in state 2n.

More generally: on a finite state space, if the probability of an event is close to 1, then with high probability the event is common r-belief for r near 1. This is not true on infinite state spaces.

Lipman, "Finite order implications of the common prior assumption." His point: there basically aren't any! All of the "bite" of the CPA is in the tails.

Setup: a parameter space Θ that people "care about"; states s ∈ S; a map f : S → Θ specifying what the payoffs are at state s; partitions H_i of S; priors p_i.

Player i's first-order beliefs at s: the conditional distribution on Θ given s. For B ⊆ Θ,

δ_i^1(s)(B) = p_i({s' : f(s') ∈ B} | h_i(s)).

Player i's second-order beliefs: beliefs about Θ and the other players' first-order beliefs:

δ_i^2(s)(B) = p_i({s' : (f(s'), δ_j^1(s')) ∈ B} | h_i(s)),

and so on.

The main point can be seen in his example. There are two possible values of an unknown parameter θ: θ_1 or θ_2. Start with a model without a common prior, then relate it to a model with a common prior. The starting model has only two states, S = {s_1, s_2}.
Each player has the trivial partition, i.e. no information beyond the prior, and p_1(s_1) = p_2(s_2) = 2/3.

Example: Player 1 owns an asset whose value is 1 at θ_1 and 2 at θ_2; f(s_i) = θ_i. At each state, 1's expected value of the asset is 4/3 and 2's is 5/3, so it is common knowledge that there are gains from trade.

Lipman shows we can match the players' beliefs, beliefs about beliefs, etc., to arbitrarily high order in a common prior model.

Fix an integer N and construct the Nth model as follows. The state space is S' = S × {1, ..., 2^N}. The common prior makes all states equally likely. The value of θ at (s, k) is determined by the s-component. Now we specify the partitions of each player in such a way that the beliefs, beliefs about beliefs, etc., look like the simple model without a common prior.

1's partition: the events {(s_1, 2k-1), (s_1, 2k), (s_2, k)} for k up to 2^{N-1}; the "left-over" s_2-states go into {(s_2, 2^{N-1}+1), ..., (s_2, 2^N)}. At every event but the last one, 1 thinks the probability of θ_1 is 2/3.

The partition for player 2 is similar but reversed: {(s_2, 2k-1), (s_2, 2k), (s_1, k)} for k up to 2^{N-1}. At all information sets but one, player 2 thinks the probability of θ_1 is 1/3.

Now we look at beliefs at the state (s_1, 1). We matched the first-order beliefs (beliefs about θ) by construction. Now look at player 1's second-order beliefs. 1 thinks there are 3 possible states: (s_1, 1), (s_1, 2), (s_2, 1). At (s_1, 1) and at (s_2, 1), player 2's information set is {(s_2, 1), (s_2, 2), (s_1, 1)}; at (s_1, 2) it is {(s_2, 3), (s_2, 4), (s_1, 2)}. The support of 1's second-order beliefs at (s_1, 1) is the set of 2's beliefs at these information sets, and at each of them 2's beliefs are (1/3 on θ_1, 2/3 on θ_2).
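Lipman's construction can be verified directly: under the uniform common prior on S × {1, ..., 2^N}, player 1's conditional probability of θ_1 is 2/3 at every cell except the left-over one. A sketch (helper names mine), using exact fractions:

```python
from fractions import Fraction

def partition_player1(N):
    """Player 1's cells in the Nth model: {(s1, 2k-1), (s1, 2k), (s2, k)} for
    k = 1..2^(N-1), plus the left-over s2-states. States are (s, k) pairs."""
    M = 2 ** N
    cells = [[(1, 2*k - 1), (1, 2*k), (2, k)] for k in range(1, M // 2 + 1)]
    cells.append([(2, k) for k in range(M // 2 + 1, M + 1)])
    return cells

def prob_theta1(cell):
    # Uniform prior, and theta at (s, k) is determined by the s-component,
    # so P(theta_1 | cell) is just the fraction of states with s = 1.
    return Fraction(sum(1 for s, _ in cell if s == 1), len(cell))

cells = partition_player1(4)
print([str(prob_theta1(c)) for c in cells[:-1]])  # 2/3 at every regular cell
print(prob_theta1(cells[-1]))                     # 0 at the left-over cell
```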
The same argument works up to order N: the Nth models are "like" the original one in that beliefs at some states agree with beliefs in the original model to high but finite order. (Beliefs at other states are very different: namely at the states where 1 is sure the state is θ_1, or 2 is sure it is θ_2.)

Conclusion: if we assume that beliefs at a given state are generated by updating from a common prior, this does not pin down their finite-order behavior. So the main force of the CPA is on the entire infinite hierarchy of beliefs.

Lipman goes on from this to make a point that is correct but potentially misleading: he says that "almost all" priors are close to a common prior. I think it is misleading because here he uses the product topology on the set of hierarchies of beliefs, a.k.a. the topology of pointwise convergence. Two types that are close in this product topology can behave very differently in a NE, so in a sense NE is not continuous in this topology. The email game is a counterexample.

"Product belief convergence": A sequence of types t_i^n converges to t_i^* if the sequence converges pointwise, that is, if for each k, δ_i^{n,k} → δ_i^{*,k}.

Now consider the expanded version of the email game, where we added the state ∞. Let t_1^n be the hierarchy of beliefs of player 1 when he has sent n messages, and let t_1^* be the hierarchy at the state ∞, where it is common knowledge that the payoff matrix is u^a.

Claim: the sequence t_1^n converges pointwise to t_1^*.

Proof: At t_1^n, 1's zero-order beliefs assign probability 1 to u^a; his first-order beliefs assign probability 1 to "u^a, and 2 knows it is u^a"; and so on up to level n-1. Hence as n goes to infinity, the hierarchy of beliefs converges pointwise to common knowledge of u^a.

In other words, if the number of levels of mutual knowledge goes to infinity, then beliefs converge to common knowledge in the product topology.
But we know that mutual knowledge to high order is not the same as almost common knowledge, and types that are close in the product topology can play very differently in Nash equilibrium. Put differently, the product topology on countably infinite sequences is insensitive to the tail of the sequence, but we know that the tail of the belief hierarchy can matter.

Next: Brandenburger-Dekel, JET 1993, "Hierarchies of Beliefs and Common Knowledge."

Here the hierarchies of belief are motivated by Harsanyi's idea of modelling incomplete information as imperfect information. Harsanyi introduced the idea of a player's "type," which summarizes the player's beliefs, beliefs about beliefs, etc.: the infinite belief hierarchy we were working with in Lipman's paper. In Lipman we were taking the state space Ω as given. Harsanyi argued that any element of the hierarchy of beliefs could be summarized by a single datum called the "type" of the player, so that there was no loss of generality in working with types instead of working explicitly with the hierarchies.

I think that the first proof is due to Mertens and Zamir. B-D prove essentially the same result, but they do it in a much clearer and shorter paper. The paper is much more accessible than MZ, but it is still a bit technical; it also involves some hard but important concepts. (Add hindsight disclaimer...)

Review of math definitions: A sequence of probability distributions p_n converges weakly to p if ∫ f dp_n → ∫ f dp for every bounded continuous function f.
This defines the topology of weak convergence. In the case of distributions on a finite space, this is the same as the usual idea of convergence in norm.

A metric space X is complete if every Cauchy sequence in X converges to a point of X. A space X is separable if it has a countable dense subset. A homeomorphism is a map f between two spaces that is 1-1 and onto (a bijection) and such that both f and its inverse are continuous. The Borel sigma-algebra on a topological space S is the sigma-algebra generated by the open sets (note that this depends on the topology).

Now for Brandenburger-Dekel. There are two individuals (the extension to more is easy) and a common underlying space of uncertainty S (this is what Lipman calls Θ). Assume S is a complete separable metric space ("Polish"). For any metric space Z, let Δ(Z) be the set of all probability measures on the Borel field of Z, endowed with the topology of weak convergence (the "weak topology").

Define X_0 = S and X_n = X_{n-1} × Δ(X_{n-1}).

So X_n is the space of nth order beliefs; a point in X_n specifies the (n-1)st order beliefs and beliefs about the opponent's (n-1)st order beliefs.

A type for player i is a hierarchy t_i = (δ_i^1, δ_i^2, ..., δ_i^n, ...) ∈ Π_{n=0}^∞ Δ(X_n) = T_0.

Now there is the possibility of further iteration: what about i's belief about j's type? Do we need to add more levels of i's beliefs about j, or is i's belief about j's type already pinned down by i's type? Harsanyi's insight is that we don't need to iterate further; this is what B-D prove formally.

Coherency: a type is coherent if for every n ≥ 2, marg_{X_{n-2}} δ_i^n = δ_i^{n-1}. So the nth and (n-1)st order beliefs agree on the lower orders. We impose this because it is not clear how to interpret incoherent hierarchies.

Let T_1 be the set of all coherent types.

Proposition (Brandenburger-Dekel): There is a homeomorphism between T_1 and Δ(S × T_0).
The basis of the proposition is the following lemma.

Lemma: Suppose the Z_n, n = 0, 1, ..., are Polish spaces, and let

D = {(δ^1, δ^2, ...) : δ^n ∈ Δ(Z_0 × ... × Z_{n-1}) and marg_{Z_0 × ... × Z_{n-2}} δ^n = δ^{n-1} for all n ≥ 2}.

Then there is a homeomorphism f : D → Δ(Π_{n=0}^∞ Z_n).

This is basically Kolmogorov's extension theorem: the theorem that says that a consistent family of marginal distributions on the finite-dimensional subspaces of a countable product space corresponds to a unique measure on the product.

To apply the lemma, let Z_0 = X_0 and Z_n = Δ(X_{n-1}) for n ≥ 1. Then Z_0 × ... × Z_n = X_n and Z_0 × Π_{n≥1} Z_n = S × T_0. (If S is a complete separable metric space, then so is Δ(S).) Here D is the set of coherent types, and we have shown it is homeomorphic to the set of beliefs over the state and the opponent's type. In words: coherency implies that i's type determines i's belief over j's type.

But what about i's belief about j's belief about i's type? This needn't be determined by i's type if i thinks that j might not be coherent. So B-D impose "common knowledge of coherency": define T × T to be the subset of T_1 × T_1 where coherency is common knowledge.

Proposition (Brandenburger-Dekel): There is a homeomorphism between T and Δ(S × T).

Loosely speaking, this says (a) the "universal type space" is big enough, and (b) common knowledge of coherency implies that the information structure is common knowledge in an informal sense: each of i's types can calculate j's beliefs about i's first-order beliefs, j's beliefs about i's beliefs about j's beliefs, etc.

Caveats:
1) In the continuity part of the homeomorphism, the argument uses the product topology on types. The drawbacks of the product topology make the homeomorphism part less important, but the isomorphism part of the theorem is independent of the topology on T.
2) The space Δ(S × T) that is identified as "universal" depends on the sigma-algebra used on S × T.
Does this matter? Loose ideas and conjectures:
• There can't be an isomorphism between a set X and its power set 2^X, so something about measures, as opposed to possibilities, is being used.
• The "right topology" on types looks more like the topology of uniform convergence than the product topology. (This claim isn't meant to be obvious; the "right topology" hasn't yet been found, and there may not be one. But Morris' "Typical Types" suggests that something like this might be true.)
• The topology of uniform convergence generates the same Borel sigma-algebra as the product topology, so maybe B-D worked with the right set of types after all.
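As a final aside, the coherency condition defined earlier is just a marginalization constraint, which is easy to check for finitely supported beliefs. A sketch (the names and the toy example are mine): a second-order belief is represented as a joint distribution over (X_0, opponent's first-order belief), and its X_0-marginal must reproduce the first-order belief.

```python
def x0_marginal(delta2):
    """Marginal on X_0 of a finitely supported second-order belief,
    represented as {(x0, opponent_first_order_belief_label): prob}."""
    m = {}
    for (x0, _), p in delta2.items():
        m[x0] = m.get(x0, 0.0) + p
    return m

def coherent(delta1, delta2, tol=1e-9):
    # marg_{X_0} delta2 must equal delta1, up to numerical tolerance
    m = x0_marginal(delta2)
    keys = set(delta1) | set(m)
    return all(abs(delta1.get(k, 0.0) - m.get(k, 0.0)) <= tol for k in keys)

delta1 = {'s_a': 0.3, 's_b': 0.7}
delta2 = {('s_a', 'mu1'): 0.3, ('s_b', 'mu1'): 0.5, ('s_b', 'mu2'): 0.2}
print(coherent(delta1, delta2))                      # True: the marginal matches
print(coherent({'s_a': 0.5, 's_b': 0.5}, delta2))    # False
```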
Lecture 10: Applications of SPE

Readings:
• Watson, Strategy: An Introduction to Game Theory. 1st ed.: Ch. 15-17; 3rd ed.: Ch. 15-17.
• Exercises: Ch. 15.4, Ch. 16.4(a)-(c).

A game: Cash in a Hat (a toy version of lender and borrower)
• Player 1 can put $0, $1, or $3 in a hat.
• The hat is passed to player 2.
• Player 2 can either "match" (i.e., add the same amount) or take the cash.
• Payoffs:
  Player 1: 0 → 0; 1 → doubled if matched, -1 if not; 3 → doubled if matched, -3 if not.
  Player 2: 1.5 for matching 1; 2 for matching 3; the $ in the hat if she takes.

This is a sequential-move game: player 2 knows player 1's choice before 2 chooses, and player 1 knows that this will be the case.

Backward induction: look forward, work back. In the tree, the terminal payoffs are (0, 0) after $0; (1, 1.5) for match and (-1, 1) for take after $1; (3, 2) for match and (-3, 3) for take after $3.

Moral hazard: the agent has incentives to do things that are bad for the principal. The example in "cash in a hat": the size of the loan/project is kept small to reduce the temptation to cheat.

Incentive design: change the contract to give incentives not to shirk. "A small share of a large pie" can be bigger than "a large share of a small pie." (In the modified tree, matching $3 pays (1.9, 3.1) instead of (3, 2).)

Incentive contracts:
• Piece rates
• Share cropping
• Collateral: subtract the house from the run-away payoffs. This lowers the borrower's payoffs at some points of the tree (taking the cash now yields 1 - house or 3 - house), yet makes the borrower better off: it changes the choices of others in a way that helps you.

Commitment strategy: getting rid of choices can make me better off. Commitment means having fewer options, and it changes the behavior of others. Knowledge matters: the other players must know the situation has changed.

Stackelberg's duopoly game (constant unit cost and linear inverse demand)
• Players: the two firms (a leader and a follower).
• Timing: firm 1 chooses a quantity q_1 ≥ 0; firm 2 observes q_1 and then chooses a quantity q_2 ≥ 0.
• Payoffs: each firm's payoff is its profit.

Basic assumptions:
• Constant unit cost: C_i(q_i) = c q_i for all q_i.
• Linear inverse demand: P(Q) = α - Q if Q ≤ α, and P(Q) = 0 if Q > α, where Q = Σ_i q_i.
• Assume c < α.

Lessons:
• Commitment: sunk costs can help.
• A spy, or having more information, can hurt you. Key: the other players knew you had more information. Reason: it can lead other players to take actions that hurt you.

First-mover advantage?
• Yes, sometimes: Stackelberg.
• But not always; there can be a second-mover advantage: rock, paper, scissors (information here is helpful).
• Sometimes there is neither a first- nor a second-mover advantage: dividing a cake with your sibling (I split, you choose).
• It can be a first- or second-mover advantage within the same game, depending on the setup: the game of Nim.

Chain-store paradox: Imagine a game in which a chain store (say, McDonald's) has an outlet in every provincial capital, and in each city there is a challenger who wants to enter the industry.
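Returning to the Stackelberg game above: under its assumptions the subgame perfect outcome is found by backward induction. The follower's best response to q_1 solves max_{q_2} q_2(α - q_1 - q_2 - c), giving q_2 = (α - c - q_1)/2, so the leader maximizes q_1(α - c - q_1)/2, giving q_1* = (α - c)/2 and q_2* = (α - c)/4. A quick numerical check (a sketch; the grid search stands in for the calculus):

```python
def follower_br(q1, alpha, c):
    # maximizes q2 * (alpha - q1 - q2 - c); the FOC gives the reaction function
    return max((alpha - c - q1) / 2, 0.0)

def stackelberg(alpha, c, grid=10000):
    """Leader's problem by brute force: try q1 on a grid, anticipating
    the follower's best response, and keep the profit-maximizing q1."""
    best_q1, best_profit = 0.0, float('-inf')
    for i in range(grid + 1):
        q1 = (alpha - c) * i / grid
        q2 = follower_br(q1, alpha, c)
        price = max(alpha - q1 - q2, 0.0)
        profit = q1 * (price - c)
        if profit > best_profit:
            best_q1, best_profit = q1, profit
    return best_q1, follower_br(best_q1, alpha, c)

q1, q2 = stackelberg(alpha=12, c=0)
print(q1, q2)   # 6.0 3.0: the leader produces twice the follower's quantity
```

The closed-form answer (α - c)/2 = 6 and (α - c)/4 = 3 matches the grid search.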
Game Theory Lecture 2

A reminder: Let G be a finite, two-player game of perfect information without chance moves.

Theorem (Zermelo, 1913): If the terminal nodes are partitioned into two sets T and T', then either player 1 can force an outcome in T or player 2 can force an outcome in T'. Zermelo's proof uses backward induction.

A game G is strictly competitive if for any two terminal nodes a, b: a ≻_1 b iff b ≻_2 a.

An application of Zermelo's theorem to strictly competitive games. Let a_1, a_2, ..., a_n be the terminal nodes of a strictly competitive game (with no chance moves and with perfect information), ordered by player 1's preferences:

a_n ≽_1 a_{n-1} ≽_1 ... ≽_1 a_2 ≽_1 a_1 (i.e. a_1 ≽_2 a_2 ≽_2 ... ≽_2 a_n).

Then there exists k, 1 ≤ k ≤ n, such that player 1 can force an outcome in {a_n, ..., a_k} and player 2 can force an outcome in {a_k, ..., a_1}.

Proof: Let W_j = {a_n, a_{n-1}, ..., a_j} for j = 1, ..., n, and let W_{n+1} = ∅. Player 1 can force an outcome in W_1 = {a_n, ..., a_1} and cannot force an outcome in W_{n+1}. Let k be the maximal integer such that player 1 can force an outcome in W_k. Since player 1 cannot force an outcome in W_{k+1}, Zermelo's theorem implies that player 2 can force an outcome in T \ W_{k+1} = {a_k, ..., a_1}.

So player 1 has a strategy s which forces an outcome at least as good as a_k for player 1, and player 2 has a strategy t which forces an outcome at least as good as a_k for player 2. Hence G(s, t) = a_k.

Now consider the implications of this result for the strategic-form game. Player 1's strategy s guarantees him at least a_k, and player 2's strategy t guarantees him at least a_k, i.e. at most a_k for player 1. The point (s, t) is a saddle point: given that player 2 plays t, player 1 has no better strategy than s, so s is player 1's best response to player 2's strategy t; similarly, t is player 2's best response to s.

A pair of strategies (s, t) such that each is a best response to the other is a Nash equilibrium (John F. Nash Jr.). This definition holds for any game, not only for strictly competitive ones.

Example (the game tree, with moves L, M, R for player 1, moves l, r for player 2, and outcomes W and L, is in the slides): backward induction à la Zermelo yields a strategy pair such as ((l, r), R). All the strategy pairs obtained by backward induction are Nash equilibria, but there are other Nash equilibria as well, e.g. ((l, r), L). The strategies obtained by backward induction are subgame perfect equilibria: in each subgame they prescribe a Nash equilibrium, whereas the non-subgame-perfect Nash equilibrium prescribes a non-equilibrium in some subgame. A subgame perfect equilibrium prescribes a Nash equilibrium in each subgame.
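The saddle-point property can be checked mechanically in the strategic form: (s, t) is a saddle point iff its entry is the minimum of its row and the maximum of its column, when entries rank outcomes by player 1's preference. A sketch with a made-up 2x2 ranking (the numbers are illustrative, not taken from the lecture's tree):

```python
def saddle_points(M):
    """Entries that are the min of their row and the max of their column,
    where M[i][j] ranks the outcome for player 1 (higher = better) and the
    game is strictly competitive, so player 2's ranking is the reverse."""
    pts = []
    for i, row in enumerate(M):
        for j, v in enumerate(row):
            if v == min(row) and v == max(M[r][j] for r in range(len(M))):
                pts.append((i, j, v))
    return pts

# Player 1 picks the row, player 2 the column.
M = [[3, 1],
     [4, 2]]
print(saddle_points(M))   # [(1, 1, 2)]: row 1 guarantees 2, column 1 concedes at most 2
```

A matching-pennies-style ranking such as [[0, 1], [1, 0]] has no saddle point, which is why the result needs perfect information.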
R. Selten was awarded the Nobel Prize in Economics in 1994.

Chance moves: Nature (player 0) chooses randomly, with known probabilities, among some actions.

Example (the tree is in the slides: Nature rolls a fair die, outcomes 1-6 with probability 1/6 each, and the players then take turns choosing "shoot" (S.) or "not shoot" (N.S.) at their information sets). Payoffs: W (when the other player dies, or when the other player chose not to shoot in his turn), D (when neither shoots), L (when dead), ranked W ≻ D ≻ L.
Mathematical Models: Game Theory
Mark Fey
University of Rochester

This course is designed to teach graduate students in political science the tools of game theory. The course will cover the standard group of essential concepts and some additional topics that are particularly important in formal theory. In addition, we will cover some specific applications of game theory in political science.

Students should have, at a minimum, a mathematical background of algebra (solving equations, graphing functions, etc.) and a basic knowledge of probability. Development of the theory does not require calculus, although some applications will be presented that utilize basic calculus (i.e., derivatives). A very brief summary of the necessary mathematics is in Appendix One of Morrow.

Game theory, as with most mathematical topics, is best learned by doing, rather than reading. Thus, an important part of the course will be the problem sets covering the lecture material and readings. These problem sets will count for 60% of the final grade, and a take-home exam at the end of the course will count for 40% of the final grade. Solutions to the problem sets will be covered in class. Auditors are welcome, and those who complete the problem sets and keep up with the lectures and reading will be entitled to seek help with problems and with the material.

There are two required texts for the course:
James Morrow, 1995. Game Theory for Political Scientists. Princeton University Press.
Robert Gibbons, 1992. Game Theory for Applied Economists. Princeton University Press.
Other readings will be made available to you for photocopying.

Schedule
June 27: Introduction, Basic Assumptions of Rational Choice. Morrow: Chs. 1-2
June 28: Decision Theory, Optimization
June 29: Representing Games, Strategic Form Games. Morrow: Chs. 3-4
June 30: Strategic Form Games
July 3 & 4: No class, but lots of fireworks!
July 5: Strategic Form Games, Dominance. Gibbons: Sec. 1.1
July 6: Nash Equilibrium, Mixed Strategies. Gibbons: Sec. 1.3
July 7: Zero-sum Games, Applications
July 10: Extensive Form Games, Backwards Induction. Morrow: Ch. 5
July 11: Subgame Perfection, Forward Induction. Gibbons: Ch. 2
July 12: Bayesian Games, Bayesian Equilibrium. Morrow: Ch. 6
July 13: Bayesian Equilibrium. Gibbons: Ch. 3
July 14: Perfect Bayesian Equilibrium and Sequential Equilibrium. Morrow: Ch. 7; Gibbons: Sec. 4.1
July 17: Sequential Equilibrium
July 18: Signaling Games. Morrow: Ch. 8; Gibbons: Sec. 4.2
July 19: Cheap Talk Games. Gibbons: Sec. 4.3
July 20: Repeated Games. Morrow: Ch. 9
July 21: Applications, Wrap-up
Week 11: Game Theory

Required Reading: Schotter pp. 229-260

Lecture Plan
1. Static game theory: normal form games; solution techniques for solving static games (dominant strategy; Nash equilibrium)
2. Prisoners' dilemma
3. Decision analysis: maximin criterion; minimax regret criterion
4. Dynamic one-off games: extensive form games; the subgame perfect Nash equilibrium

1. Static Game Theory

Static games: the players make their moves in isolation, without knowing what the other players have done.

1.1 Normal Form Games

In game theory there are two ways in which a game can be represented: the normal form (or strategic form) game, and the extensive form game.

A normal form game is any game where we can identify the following three things:
1. The players.
2. The strategies available to each player.
3. The payoffs. A payoff is what a player will receive at the end of the game, contingent upon the actions of all players in the game.

Suppose that two people (A and B) are playing a simple game. A will write one of two words on a piece of paper, "Top" or "Bottom". At the same time, B will independently write "Left" or "Right" on a piece of paper. After they do this, the papers will be examined and they will get the payoffs depicted in Table 1. In this matrix, the payoff to A (B) is the first (second) entry in each box. For example, if A writes "Top" and B writes "Left", we examine the top-left corner of the matrix: the payoff to A is 1 and the payoff to B is 2.

What is/are the equilibrium outcome(s) of this game?

1.2 The Nash Equilibrium Approach to Solving Static Games

Nash equilibrium was first defined by John Nash in 1951, building on the work of Cournot (1838). A pair of strategies is a Nash equilibrium if A's choice is optimal given B's choice, and B's choice is optimal given A's choice.
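This mutual-best-response condition can be checked mechanically for any bimatrix game. A sketch (the helper name is mine), run on the prisoners' dilemma of section 2 with months in prison entered as negative payoffs:

```python
def pure_nash(payoffs, rows, cols):
    """Pure-strategy Nash equilibria of a two-player normal form game.
    payoffs[(r, c)] = (row player's payoff, column player's payoff)."""
    eq = []
    for r in rows:
        for c in cols:
            best_r = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in rows)
            best_c = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in cols)
            if best_r and best_c:   # each strategy is a best response to the other
                eq.append((r, c))
    return eq

# Prisoners' dilemma: go free = 0, one month = -1, three months = -3, six = -6.
pd = {('confess', 'confess'): (-3, -3), ('confess', 'deny'): (0, -6),
      ('deny', 'confess'): (-6, 0),     ('deny', 'deny'): (-1, -1)}
print(pure_nash(pd, ['confess', 'deny'], ['confess', 'deny']))
# [('confess', 'confess')]: the unique equilibrium, even though (deny, deny)
# would leave both players better off
```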
When this equilibrium outcome is reached, neither individual wants to change his behaviour.

Finding the Nash equilibrium of any game involves two stages.
1) Identify each player's optimal strategy in response to what the other players might do:
   Given B chooses Left, the optimal strategy for A is ...
   Given B chooses Right, the optimal strategy for A is ...
   Given A chooses Top, the optimal strategy for B is ...
   Given A chooses Bottom, the optimal strategy for B is ...
   We show this by underlining the payoff element for each case.
2) A Nash equilibrium is identified when all players are playing their optimal strategies simultaneously.

In the case of Table 2: if A chooses Top, then the best thing for B to do is to choose Left, since the payoff to B from choosing Left is 1 and the payoff from choosing Right is 0. If B chooses Left, then the best thing for A to do is to choose Top, as A will get a payoff of 2 rather than 0. Thus if A chooses Top, B chooses Left, and if B chooses Left, A chooses Top. Therefore we have a Nash equilibrium: each person is making the optimal choice, given the other person's choice.

If the payoff matrix changes as in Table 2, then what is the Nash equilibrium? And what if the payoffs are changed as shown in Table 3?

2. Prisoners' Dilemma

Pareto efficiency: an allocation is Pareto efficient if goods cannot be reallocated to make someone better off without making someone else worse off.

Two prisoners who were partners in a crime are being questioned in separate rooms. Each prisoner has a choice of confessing to the crime (implicating the other) or denying it. If only one confesses, then he goes free and his partner spends 6 months in prison. If both prisoners deny, then both spend 1 month in prison. If both confess, they are both held for three months. The payoff matrix for this game is depicted in Table 4.

What is the equilibrium outcome?

3. Decision Analysis

Let N = 1 to 4 be a set of possible states of nature, and let S = 1 to 4 be a set of strategies decided by you.
Now you have to decide which strategy to choose given the payoff matrix in Table 5 (S = you, N = your opponent). In this case you don't care about the payoff of your opponent, i.e. nature.

3.1 The Maximin Decision Rule (Wald criterion)

Look for the minimum payoff of each choice, and then maximise over these minimum payoffs. To apply it, highlight the minimum payoff for each strategy.

3.2 The Minimax Regret Decision Rule (Savage criterion)

For this rule we need to compute the losses, or regret matrix, from the payoff matrix. The losses are defined as the difference between the actual payoff and what that payoff would have been had the correct strategy been chosen:

Regret/Loss = (maximum payoff in the column) - (actual payoff).

For example, if N=1 occurs and S=1 is chosen, the actual gain is 2 (Table 5). The best action given N=1 is to choose S=1, which gives the best gain of 2, so for (N=1, S=1) the regret is 0. If N=1 occurs but S=2 is chosen, the actual gain is 1; the best action given N=1 is still S=1, with gain 2, so for (N=1, S=2) the regret is 2 - 1 = 1.

Following the same analysis, we can compute the losses for each N and S, and so fill in the regret matrix (Table 6). After computing the regret matrix, we look for the maximum regret of each strategy and then take the minimum of these. Minimax regret is still very cautious, but less so than the maximin rule.

4. Dynamic One-off Games

A game can be dynamic for two reasons. First, players may be able to observe the actions of other players before deciding upon their optimal response.
Second, a one-off game may be repeated a number of times.

4.1 Extensive Form Games

Dynamic games cannot be represented by payoff matrices; we have to use a decision tree (the extensive form) to represent a dynamic game.

Start with the concept of a dynamic one-off game: the game is played only once, but players can condition their optimal actions on what other players have done in the past.

Suppose that there are two firms (A and B) that are considering whether or not to enter a new market. If both firms enter the market, then each makes a loss of $10 mil. If only one firm enters the market, it earns a profit of $50 mil. Suppose also that Firm B observes whether Firm A has entered the market before it decides what to do.

Any extensive form game has the following elements in common:
Nodes: a position where a player must take a decision. The first position is called the initial node, and each node is labelled so as to identify who is making the decision.
Branches: these represent the alternative choices that the person faces, and so correspond to the available actions.
Payoff vectors: these represent the payoffs for each player, with the payoffs listed in the order of players. When we reach a payoff vector, the game ends.

In period 1, Firm A makes its decision. This is observed by Firm B, which decides to enter or stay out of the market in period 2. In this extensive form game, each of Firm B's decision nodes starts a subgame, and Firm B observes Firm A's action before making its own decision.

4.2 Subgame Perfect Nash Equilibrium

The subgame perfect Nash equilibrium is the predicted solution to a dynamic one-off game.
From the extensive form of this game, we can observe that there are two subgames, one starting from each of Firm B's decision nodes. How can we identify the equilibrium outcome? We start with the last period first and work backward through successive nodes until we reach the beginning of the game.

Starting with the last period of the game, we have two nodes. At each node, Firm B decides whether or not to enter the market, based on what Firm A has already done.

At the node "Firm A enters", Firm B will either make a loss of $10 mil (if it enters) or receive a payoff of 0 (if it stays out); these are shown by the payoff vectors (-10, -10) and (50, 0). If Firm B is rational, it will stay out, so the node "Firm A enters" can be replaced by the vector (50, 0).

At the second node, "Firm A stays out", Firm A has not entered the market. Firm B will either make a profit of $50 mil (if it enters) or receive a payoff of 0 (if it stays out); these are shown by the payoff vectors (0, 50) and (0, 0). If Firm B is rational, it will enter, so we can rule out the possibility that both firms stay out.

We can now move back to the initial node, where Firm A has to decide whether or not to enter. If Firm B is rational, the game will never reach the previously "crossed-out" vectors. Firm A knows that if it enters, the game will end at (A enters, B stays out), where A gets 50 and B gets 0. On the other hand, if Firm A stays out, the game will end at (A stays out, B enters), where A gets 0 and B gets 50. So Firm A should enter the market at the first stage, and the eventual outcome is (A enters, B stays out).

How to find a subgame perfect equilibrium of a dynamic one-off game:
1. Start with the last period of the game and cross out the irrelevant payoff vectors.
2. Replace the preceding nodes by the uncrossed payoff vectors until you reach the initial node.
3.
The only uncrossed payoff vector(s) give the subgame perfect Nash equilibrium.
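For the entry game above, the three-step procedure amounts to the following computation (payoffs in $ mil as in the text; the function and variable names are mine):

```python
# Terminal payoffs (Firm A, Firm B) for each pair of actions.
payoffs = {('enter', 'enter'): (-10, -10), ('enter', 'out'): (50, 0),
           ('out', 'enter'): (0, 50),      ('out', 'out'): (0, 0)}

def backward_induction(payoffs, actions=('enter', 'out')):
    # Step 1: at each of B's nodes, cross out everything but B's best reply.
    b_plan = {a: max(actions, key=lambda b: payoffs[(a, b)][1]) for a in actions}
    # Steps 2-3: A anticipates B's plan and picks its own best action.
    a_star = max(actions, key=lambda a: payoffs[(a, b_plan[a])][0])
    return a_star, b_plan

a_star, b_plan = backward_induction(payoffs)
print(a_star, b_plan)   # enter {'enter': 'out', 'out': 'enter'}
```

The output reproduces the argument in the text: B's plan is "stay out if A enters, enter if A stays out," and given that plan A enters, so the outcome is (A enters, B stays out).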