The multi-armed bandit, with constraints
2012, Annals of Operations Research
The early sections of this paper present an analysis of a Markov decision model known as the multi-armed bandit, under the assumption that the utility function of the decision maker is either linear or exponential. The analysis includes efficient procedures for computing the expected utility associated with the use of a priority policy and for identifying a priority policy that is optimal. The methodology in these sections is novel, building on the use of elementary row operations. In the later sections of this paper, the analysis is adapted to accommodate constraints that link the bandits.

It was demonstrated in [12, 10] that, given each multi-state (the vector of the bandits' current states), it is optimal to play any Markov chain (bandit) whose current state has the largest index (lowest label). Following [12, 10], the multi-armed bandit problem has stimulated research in control theory, economics, probability, and operations research. A sampling of noteworthy papers includes Bergemann and Välimäki [2], Bertsimas and Niño-Mora [4], El Karoui and Karatzas [8], Katehakis and Veinott [15], Schlag [17], Sonin [18], Tsitsiklis [19], Varaiya, Walrand and Buyukkoc [20], Weber [22], and Whittle [24]. Books on the subject (each listing many references) include Berry and Fristedt [3], Gittins [11], and Gittins, Glazebrook and Weber [13]. The last and most recent of these books provides a nearly up-to-date status report on the multi-armed bandit.

An implication of the analysis in [12, 10] is that the largest of all of the indices equals the maximum over all states of the ratio r(i)/(1 − c), where r(i) denotes the expected reward earned if state i's bandit is played once while state i is observed and where c is the discount factor; a one-step verification, under one common normalization of the index, is sketched below. In 1994, Tsitsiklis [19]
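This excerpt does not restate the normalization of the index used in [12, 10]; the sketch below adopts Whittle's retirement-option normalization, under which the stated identity is immediate. Under other common normalizations (e.g., dividing by the expected discounted number of plays rather than by E[1 − c^τ]), every index is rescaled by the constant 1 − c and the priority ordering is unchanged. For a state i of a bandit with state process X_0 = i, X_1, X_2, ..., take

\[
m(i) \;=\; \sup_{\tau \ge 1}\; \frac{\mathbb{E}_i\!\left[\sum_{t=0}^{\tau-1} c^{\,t}\, r(X_t)\right]}{\mathbb{E}_i\!\left[\,1 - c^{\tau}\,\right]},
\]

the supremum running over stopping times. Set r* = max_j r(j). Since r(X_t) ≤ r* and E_i[Σ_{t=0}^{τ−1} c^t] = E_i[1 − c^τ]/(1 − c), the numerator is at most r* E_i[1 − c^τ]/(1 − c), so

\[
m(i) \;\le\; \frac{r^{*}}{1-c} \qquad \text{for every state } i,
\]

while the stopping time τ = 1, applied at a state attaining r*, yields the ratio r*/(1 − c) exactly. Hence max_i m(i) = max_i r(i)/(1 − c), as claimed.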
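The efficient computational procedures referred to above are developed later in the paper and are not reproduced in this excerpt. Purely as an illustrative sketch, the code below implements a different, classical computation of the indices of a single bandit: the largest-remaining-index scheme of Varaiya, Walrand and Buyukkoc [20], with a naive linear-algebra inner step. The function name gittins_indices and the use of Whittle's normalization (under which the top index is max_i r(i)/(1 − c)) are choices made here, not taken from the paper.

import numpy as np

def gittins_indices(P, r, c):
    """Indices of one bandit (Markov chain) via the largest-remaining-index
    scheme of Varaiya, Walrand and Buyukkoc [20].

    P: (n, n) one-step transition matrix of the bandit.
    r: length-n vector; r[i] is the expected reward from one play in state i.
    c: discount factor, 0 < c < 1.
    Returns a length-n array of indices in Whittle's retirement
    normalization, so the largest entry equals max(r) / (1 - c).
    """
    P = np.asarray(P, dtype=float)
    r = np.asarray(r, dtype=float)
    n = len(r)
    top = []                          # states already indexed, best first
    remaining = set(range(n))
    index = np.empty(n)
    while remaining:
        best_i, best_m = None, -np.inf
        for i in remaining:
            T = top + [i]             # candidate continuation set
            A = np.eye(len(T)) - c * P[np.ix_(T, T)]
            d = np.linalg.solve(A, r[T])             # expected discounted reward before exiting T
            b = np.linalg.solve(A, np.ones(len(T)))  # expected discounted time before exiting T
            m = d[-1] / ((1.0 - c) * b[-1])          # candidate index of state i
            if m > best_m:
                best_i, best_m = i, m
        index[best_i] = best_m        # the maximizing candidate value is the true index
        top.append(best_i)
        remaining.remove(best_i)
    return index

For example, with P = [[0.5, 0.5], [0.5, 0.5]], r = (1, 0) and c = 0.5, the call gittins_indices(P, r, 0.5) returns (2.0, 0.5); the top index is r(0)/(1 − c) = 2, in line with the identity sketched above.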