Discussion paper

DP19058 Disentangling Exploration from Exploitation

Starting from Robbins (1952), the literature on experimentation via multi-armed bandits has wed exploration and exploitation. Nonetheless, in many applications, agents’ exploration and exploitation need not be intertwined: a policymaker may assess new policies different from the status quo; an investor may evaluate projects outside her portfolio. We characterize the optimal experimentation policy when exploration and exploitation are disentangled in the case of Poisson bandits, allowing for general news structures. The optimal policy features complete learning asymptotically and exhibits substantial persistence, but cannot be identified by an index à la Gittins. Disentanglement is particularly valuable for intermediate parameter values.


Lizzeri, A. and L. Yariv (2024), ‘DP19058 Disentangling Exploration from Exploitation’, CEPR Discussion Paper No. 19058. CEPR Press, Paris & London. https://cepr.org/publications/dp19058