DP6983 Negatively Correlated Bandits

Author(s): Nicolas Klein, Sven Rady
Publication Date: October 2008
Keyword(s): Bayesian Learning, Exponential Distribution, Markov Perfect Equilibrium, Poisson Process, Strategic Experimentation, Two-Armed Bandit
JEL(s): C73, D83, O32
Programme Areas: Industrial Organization
Link to this Page: www.cepr.org/active/publications/discussion_papers/dp.php?dpno=6983

We analyze a two-player game of strategic experimentation with two-armed bandits. Each player has to decide in continuous time whether to use a safe arm with a known payoff or a risky arm whose likelihood of delivering payoffs is initially unknown. The quality of the risky arms is perfectly negatively correlated between players. In marked contrast to the case where both risky arms are of the same type, we find that learning will be complete in any Markov perfect equilibrium if the stakes exceed a certain threshold, and that all equilibria are in cutoff strategies. For low stakes, the equilibrium is unique and symmetric, and coincides with the planner's solution. For high stakes, the equilibrium is unique and symmetric, and tantamount to myopic behavior. For intermediate stakes, there is a continuum of equilibria.
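The setup described in the abstract can be illustrated with a short discrete-time simulation. Everything below is an illustrative sketch, not the paper's model: the function names, parameter values, the time discretization, and the assumption that successes are conclusive (a bad risky arm never pays) are all our own choices. With perfectly negatively correlated arms, one belief p (that player 1's risky arm is good) summarizes the state, and player 2's failures are good news for player 1.

```python
import math
import random

def belief_update_no_news(p, a1, a2, lam, dt):
    """Posterior that player 1's risky arm is good after an interval dt
    with no success on either risky arm.  a1, a2 in {0, 1} indicate
    whether each player experimented.  Negative correlation: player 2's
    arm is good exactly when player 1's is bad, so no news from player
    2's experimentation raises p."""
    l_arm1_good = math.exp(-lam * a1 * dt)  # P(no news | arm 1 good)
    l_arm1_bad = math.exp(-lam * a2 * dt)   # P(no news | arm 2 good, i.e. arm 1 bad)
    return p * l_arm1_good / (p * l_arm1_good + (1 - p) * l_arm1_bad)

def simulate(p0, cutoff, lam=1.0, dt=0.01, T=50.0, rng=None):
    """Both players follow the same (hypothetical) cutoff rule on the
    belief in their own arm.  Returns (terminal belief, learned), where
    learned is True if a conclusive success identified the good arm."""
    rng = rng or random.Random(0)
    p = p0
    good_is_1 = rng.random() < p0           # draw the true state once
    t = 0.0
    while t < T:
        a1 = 1 if p >= cutoff else 0        # player 1 experiments if optimistic
        a2 = 1 if (1 - p) >= cutoff else 0  # player 2's belief in own arm is 1 - p
        # a conclusive Poisson success arrives only on the good arm
        if a1 and good_is_1 and rng.random() < 1 - math.exp(-lam * dt):
            return 1.0, True
        if a2 and not good_is_1 and rng.random() < 1 - math.exp(-lam * dt):
            return 0.0, True
        if a1 == 0 and a2 == 0:             # both play safe: learning stops
            return p, False
        p = belief_update_no_news(p, a1, a2, lam, dt)
        t += dt
    return p, False
```

One feature of negative correlation shows up directly in the updating rule: when both players experiment and neither succeeds, the two likelihoods cancel and the belief does not move, so learning then proceeds only through a conclusive success. When only one player experiments, no news pushes the belief against that player's arm.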