Accelerating Approximate Thompson Sampling with Underdamped Langevin Monte Carlo

Haoyang Zheng, Wei Deng, Christian Moya, Guang Lin, in AISTATS, 2024.

Abstract: Approximate Thompson sampling with Langevin Monte Carlo broadens its reach from Gaussian posterior sampling to more general smooth posteriors. However, it still encounters scalability issues in high-dimensional problems when high accuracy is required. To address this, we propose an approximate Thompson sampling strategy based on underdamped Langevin Monte Carlo, the go-to workhorse for simulating high-dimensional posteriors. Under standard smoothness and log-concavity conditions, we study accelerated posterior concentration and sampling using a specific Lyapunov function construction. This design improves the sample complexity for achieving logarithmic regret from $\tilde{\mathcal{O}}(d)$ to $\tilde{\mathcal{O}}(\sqrt{d})$ and improves the resulting regret bounds. The scalability and robustness of our algorithm are also empirically validated through synthetic experiments on high-dimensional bandit problems.

[link] [arxiv] [code] [poster]
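
To illustrate the overall idea, here is a minimal sketch of Thompson sampling for a linear-Gaussian bandit in which each posterior sample is produced by a few steps of underdamped (kinetic) Langevin Monte Carlo. This is a toy example under simplifying assumptions, not the paper's exact algorithm or analysis; all names (`grad_neg_log_post`, `eta`, `gamma`, `n_ulmc`) and parameter values are illustrative choices.

```python
# Toy Thompson sampling with underdamped Langevin Monte Carlo (ULMC) posterior
# sampling on a linear-Gaussian bandit.  Illustrative sketch only; step size,
# friction, and all other settings are assumptions made for this example.
import numpy as np

rng = np.random.default_rng(0)

d, n_arms, T = 20, 50, 500          # dimension, number of arms, horizon
sigma, lam = 0.5, 1.0               # reward-noise std, Gaussian-prior precision
eta, gamma, n_ulmc = 0.05, 2.0, 20  # ULMC step size, friction, inner steps

arms = rng.normal(size=(n_arms, d)) / np.sqrt(d)   # fixed feature vectors
theta_true = rng.normal(size=d)                    # unknown parameter

X, r = [], []                        # observed features and rewards
theta, v = np.zeros(d), np.zeros(d)  # warm-started position and velocity


def grad_neg_log_post(th):
    """Gradient of the negative log posterior (Gaussian likelihood + prior)."""
    g = lam * th
    if X:
        Xm, rv = np.asarray(X), np.asarray(r)
        g += Xm.T @ (Xm @ th - rv) / sigma**2
    return g


for t in range(T):
    # --- approximate posterior sampling via underdamped Langevin Monte Carlo ---
    for _ in range(n_ulmc):
        # Euler-Maruyama discretization of the kinetic Langevin diffusion:
        #   dv = -gamma*v dt - grad U(theta) dt + sqrt(2*gamma) dW,  dtheta = v dt
        v = v - eta * (gamma * v + grad_neg_log_post(theta)) \
            + np.sqrt(2.0 * gamma * eta) * rng.normal(size=d)
        theta = theta + eta * v

    # --- Thompson sampling: act greedily with respect to the posterior sample ---
    a = int(np.argmax(arms @ theta))
    reward = arms[a] @ theta_true + sigma * rng.normal()
    X.append(arms[a]); r.append(reward)

print("estimation error:", np.linalg.norm(theta - theta_true))
```

Warm-starting the ULMC chain from the previous round's sample keeps the number of inner gradient steps per round small, which is where the improved sample complexity of underdamped over overdamped Langevin matters in high dimensions.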