Meta-Thompson Sampling. Kveton, B., Konobeev, M., Zaheer, M., Hsu, C., Mladenov, M., Boutilier, C., & Szepesvári, C. In Proceedings of the 38th International Conference on Machine Learning (ICML), pages 5884–5893, 2021.
Efficient exploration in bandits is a fundamental online learning problem. We propose a variant of Thompson sampling that learns to explore better as it interacts with bandit instances drawn from an unknown prior. The algorithm meta-learns the prior and thus we call it MetaTS. We propose several efficient implementations of MetaTS and analyze it in Gaussian bandits. Our analysis shows the benefit of meta-learning and is of broader interest, because we derive a novel prior-dependent Bayes regret bound for Thompson sampling. Our theory is complemented by empirical evaluation, which shows that MetaTS quickly adapts to the unknown prior.
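The Gaussian setting in the abstract admits a compact illustration: each task draws its arm means from N(μ*, σ₀² I) with μ* unknown, and MetaTS maintains a Gaussian meta-posterior over μ*, samples a candidate prior mean at the start of each task, and runs ordinary Thompson sampling under that sampled prior. The sketch below is a minimal illustration of this idea under stated assumptions, not the paper's reference implementation; the function name `meta_ts` and all parameter values are illustrative, and it assumes independent arms with known task-prior scale σ₀ and reward-noise scale σ.

```python
import numpy as np


def meta_ts(num_tasks, horizon, K, mu_star, sigma_0, sigma, mu_q, sigma_q, rng):
    """Illustrative MetaTS sketch for K-armed Gaussian bandits.

    Per task: arm means theta ~ N(mu_star, sigma_0^2 I), rewards ~ N(theta_a, sigma^2).
    A Gaussian meta-posterior over the unknown prior mean mu_star starts at
    N(mu_q, sigma_q^2 I) and is updated after every task.
    """
    meta_mean = np.full(K, float(mu_q))          # meta-posterior mean per arm
    meta_var = np.full(K, float(sigma_q) ** 2)   # meta-posterior variance per arm
    per_task_regret = []

    for _ in range(num_tasks):
        theta = rng.normal(mu_star, sigma_0, size=K)  # draw a bandit instance
        # Meta step: sample a prior mean from the meta-posterior, then run
        # ordinary Thompson sampling with the prior N(mu_s, sigma_0^2 I).
        mu_s = rng.normal(meta_mean, np.sqrt(meta_var))
        post_mean = mu_s.copy()
        post_var = np.full(K, sigma_0 ** 2)
        pulls = np.zeros(K)
        reward_sums = np.zeros(K)
        regret = 0.0

        for _ in range(horizon):
            a = int(np.argmax(rng.normal(post_mean, np.sqrt(post_var))))
            r = rng.normal(theta[a], sigma)
            pulls[a] += 1
            reward_sums[a] += r
            # Conjugate Gaussian update of the pulled arm's posterior.
            post_var[a] = 1.0 / (1.0 / sigma_0 ** 2 + pulls[a] / sigma ** 2)
            post_mean[a] = post_var[a] * (mu_s[a] / sigma_0 ** 2
                                          + reward_sums[a] / sigma ** 2)
            regret += theta.max() - theta[a]
        per_task_regret.append(regret)

        # Meta-posterior update: with theta_a integrated out, each arm's sample
        # mean is a Gaussian observation of mu_star[a] with variance
        # sigma_0^2 + sigma^2 / n_a, so the update stays conjugate.
        for a in range(K):
            if pulls[a] > 0:
                obs_var = sigma_0 ** 2 + sigma ** 2 / pulls[a]
                new_var = 1.0 / (1.0 / meta_var[a] + 1.0 / obs_var)
                meta_mean[a] = new_var * (meta_mean[a] / meta_var[a]
                                          + (reward_sums[a] / pulls[a]) / obs_var)
                meta_var[a] = new_var

    return per_task_regret


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    regret = meta_ts(num_tasks=50, horizon=200, K=5, mu_star=0.0,
                     sigma_0=0.5, sigma=1.0, mu_q=0.0, sigma_q=1.0, rng=rng)
    print("mean regret, first 10 tasks:", np.mean(regret[:10]))
    print("mean regret, last 10 tasks: ", np.mean(regret[-10:]))
```

As tasks accumulate, the sampled prior means concentrate around μ*, so within-task Thompson sampling starts from a sharper, better-centered prior; that is the adaptation the abstract describes. The per-arm meta-update above is one conjugate form consistent with this Gaussian model; the paper's exact meta-posterior update and its efficient implementations may differ in details.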
