Bandit Algorithms

The Bayesian approach to learning starts by choosing a prior probability distribution over the unknown parameters of the world. Then, as the learner makes observations, the prior is updated using Bayes' rule to form the posterior, which represents the new … Continue Reading
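As a concrete illustration of the update step described above, here is a minimal sketch of Bayes' rule in a Beta-Bernoulli model; the model choice and the `update` helper are illustrative, not taken from the post:

```python
# Beta(a, b) prior over an unknown Bernoulli mean; each success or
# failure observation updates the posterior in closed form, so the
# posterior is again a Beta distribution.
def update(a, b, observation):
    """One application of Bayes' rule in the Beta-Bernoulli model."""
    return (a + 1, b) if observation == 1 else (a, b + 1)

a, b = 1, 1                     # Beta(1, 1) = uniform prior on [0, 1]
for obs in [1, 1, 0, 1]:        # three successes, one failure
    a, b = update(a, b, obs)

posterior_mean = a / (a + b)    # (1 + successes) / (2 + observations)
print(a, b, posterior_mean)     # prints: 4 2 0.6666666666666666
```

The conjugacy is what makes the update a one-line bookkeeping step; for non-conjugate models the posterior generally has no such closed form.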

In an earlier post we analyzed an algorithm called Exp3 for $k$-armed adversarial bandits for which the expected regret is bounded by \begin{align*} R_n = \max_{a \in [k]} \E\left[\sum_{t=1}^n y_{tA_t} - y_{ta}\right] \leq \sqrt{2n k \log(k)}\,. \end{align*} The setting of … Continue Reading
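For reference, a short sketch of Exp3 with losses in $[0,1]$, using the learning rate $\eta = \sqrt{2\log(k)/(nk)}$ that yields a bound of the above form; the function name and interface here are my own, not the post's:

```python
import numpy as np

def exp3(losses, eta, seed=0):
    """Exp3 for k-armed adversarial bandits with losses in [0, 1].

    losses: (n, k) array of loss vectors chosen by the adversary.
    eta: learning rate, e.g. sqrt(2 * log(k) / (n * k)).
    Returns the sequence of arms played.
    """
    n, k = losses.shape
    rng = np.random.default_rng(seed)
    s = np.zeros(k)                 # cumulative importance-weighted loss estimates
    chosen = []
    for t in range(n):
        # exponential-weights distribution; subtracting s.min() avoids underflow
        p = np.exp(-eta * (s - s.min()))
        p /= p.sum()
        a = rng.choice(k, p=p)
        chosen.append(a)
        # unbiased importance-weighted estimate, updated for the played arm only
        s[a] += losses[t, a] / p[a]
    return chosen
```

On an instance where one arm always has loss 0 and the rest have loss 1, the play distribution concentrates on the good arm at a rate governed by $\eta$.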

To revive the content on this blog a little we have decided to highlight some of the new topics covered in the book that we are excited about and that were not previously covered in the blog. In this post … Continue Reading

Dear readers, nearly two years after starting to write the blog we have at last completed a first draft of the book, which is to be published by Cambridge University Press. The book is available for free as a … Continue Reading

This website has been quiet for some time, but we have not given up on bandits just yet. First up, we recently gave a short tutorial at AAAI that covered the basics of finite-armed stochastic bandits and stochastic linear bandits. Continue Reading

According to the main result of the previous post, given any finite action set $\cA$ with $K$ actions $a_1,\dots,a_K \in \R^d$, no matter how an adversary selects the loss vectors $y_1,\dots,y_n \in \R^d$, as long as the action losses $\ip{a_k, y_t}$ are in … Continue Reading

In the next few posts we will consider adversarial linear bandits, which, up to a crude first approximation, can be thought of as the adversarial version of stochastic linear bandits. The discussion of the exact nature of the relationship between … Continue Reading

In the last two posts we considered stochastic linear bandits, where the actions are vectors in the $d$-dimensional Euclidean space. According to our previous calculations, under the condition that the expected rewards of all the actions are in a fixed … Continue Reading

Continuing the previous post, here we give a construction for confidence bounds based on ellipsoidal confidence sets. We also put things together and show a bound on the regret of the UCB strategy that uses the constructed confidence bounds. Constructing the … Continue Reading
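The overall shape of such a UCB strategy can be sketched as follows. This is a simulation sketch only: the bonus scale `beta`, the regularizer `lam`, and the noise level are placeholder constants, not the carefully tuned choices derived in the post:

```python
import numpy as np

def lin_ucb(actions, theta, n, lam=1.0, beta=2.0, seed=0):
    """Sketch of UCB for stochastic linear bandits with an ellipsoidal
    confidence set around the least-squares estimate.

    actions: (K, d) array of action vectors.
    theta: the unknown parameter, used here only to simulate rewards.
    Returns the total reward collected over n rounds.
    """
    rng = np.random.default_rng(seed)
    K, d = actions.shape
    V = lam * np.eye(d)            # regularized design matrix
    b = np.zeros(d)                # sum of reward-weighted actions
    total_reward = 0.0
    for t in range(n):
        Vinv = np.linalg.inv(V)
        theta_hat = Vinv @ b       # ridge-regression estimate
        # optimistic index: estimate plus ellipsoidal exploration bonus
        widths = np.sqrt(np.einsum('kd,de,ke->k', actions, Vinv, actions))
        ucb = actions @ theta_hat + beta * widths
        a = actions[np.argmax(ucb)]
        r = a @ theta + 0.1 * rng.standard_normal()   # noisy linear reward
        V += np.outer(a, a)
        b += r * a
        total_reward += r
    return total_reward
```

The exploration bonus $\sqrt{a^\top V_t^{-1} a}$ is exactly the half-width of the confidence ellipsoid in direction $a$, which is why rarely explored directions get larger bonuses.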

Lower bounds for linear bandits turn out to be more nuanced than in the finite-armed case. The big difference is that for linear bandits the shape of the action set plays a role in the form of the regret, not just the … Continue Reading