Finite-time Analysis of the Multiarmed Bandit Problem

Abstract

Reinforcement learning policies face the exploration versus exploitation dilemma, i.e. the search for a balance between exploring the environment to find profitable actions and taking the empirically best action as often as possible. A popular measure of a policy's success in addressing this dilemma is the regret, that is, the loss incurred because the globally optimal policy is not followed at all times. One of the simplest examples of the exploration/exploitation dilemma is the multi-armed bandit problem. Lai and Robbins were the first to show that the regret for this problem has to grow at least logarithmically in the number of plays. Since then, policies which asymptotically achieve this regret have been devised by Lai and Robbins and many others. In this work we show that the optimal logarithmic regret is also achievable uniformly over time, with simple and efficient policies, and for all reward distributions with bounded support.
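Among the "simple and efficient policies" the paper proposes is UCB1, a deterministic index policy: play each arm once, then always play the arm maximizing x̄_j + sqrt(2 ln n / n_j), where x̄_j is arm j's empirical mean reward, n_j its play count, and n the total number of plays so far. The Python sketch below is a minimal illustration of this rule; the Bernoulli reward simulation, the arm means `p`, and the helper name `pull` are assumptions for the demo, not part of the paper.

```python
import math
import random

def ucb1(pull, n_arms, horizon):
    """UCB1 index policy: play each arm once, then always play the arm
    maximizing mean_j + sqrt(2 * ln(n) / n_j)."""
    counts = [0] * n_arms   # n_j: number of plays of arm j
    means = [0.0] * n_arms  # empirical mean reward of arm j
    for t in range(horizon):
        if t < n_arms:
            arm = t  # initialization: play each arm once
        else:
            # t equals the total number of plays so far
            arm = max(range(n_arms),
                      key=lambda j: means[j] + math.sqrt(2 * math.log(t) / counts[j]))
        r = pull(arm)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]  # incremental mean update
    return counts

# Illustrative use with Bernoulli arms (the means below are an assumption).
p = [0.3, 0.5, 0.7]
counts = ucb1(lambda j: 1.0 if random.random() < p[j] else 0.0, len(p), 10_000)
print(counts)  # the best arm (index 2) should dominate the play counts
```

Because suboptimal arms are played only O(ln n) times under this policy, the regret grows logarithmically at every finite time n, matching the Lai–Robbins lower bound uniformly rather than only asymptotically.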


Author information

Authors and Affiliations

  1. University of Technology Graz, A-8010, Graz, Austria
    Peter Auer
  2. DTI, University of Milan, via Bramante 65, I-26013, Crema, Italy
    Nicolò Cesa-Bianchi
  3. Lehrstuhl Informatik II, Universität Dortmund, D-44221, Dortmund, Germany
    Paul Fischer

Cite this article

Auer, P., Cesa-Bianchi, N. & Fischer, P. Finite-time Analysis of the Multiarmed Bandit Problem. Machine Learning 47, 235–256 (2002). https://doi.org/10.1023/A:1013689704352
