
Pessimism for Offline Linear Contextual Bandits using $\ell_p$ Confidence Sets

Part of Advances in Neural Information Processing Systems 35 (NeurIPS 2022) Main Conference Track


Authors

Gene Li, Cong Ma, Nati Srebro

Abstract

We present a family $\{\hat{\pi}_p\}_{p \ge 1}$ of pessimistic learning rules for offline learning of linear contextual bandits, relying on confidence sets with respect to different $\ell_p$ norms, where $\hat{\pi}_2$ corresponds to Bellman-consistent pessimism (BCP), while $\hat{\pi}_\infty$ is a novel generalization of lower confidence bound (LCB) to the linear setting. We show that the novel $\hat{\pi}_\infty$ learning rule is, in a sense, adaptively optimal, as it achieves the minimax performance (up to log factors) against all $\ell_q$-constrained problems, and as such it strictly dominates all other predictors in the family, including $\hat{\pi}_2$.
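
To fix ideas, rules of this flavor can be sketched as maximizing a pessimistic value computed over an $\ell_p$ confidence set around a regression estimate. The display below is only an illustrative template under assumed notation (the feature map $\phi$, regression estimate $\hat{\theta}$, empirical covariance $\hat{\Sigma}$, and radius $\beta_p$ are placeholders, and the paper's exact construction may differ):

$$
\hat{\pi}_p \in \arg\max_{\pi}\ \min_{\theta \in \Theta_p}\ \mathbb{E}_x\!\left[\langle \phi(x, \pi(x)), \theta \rangle\right],
\qquad
\Theta_p = \left\{\theta : \big\|\hat{\Sigma}^{1/2}(\theta - \hat{\theta})\big\|_p \le \beta_p\right\}.
$$

Under this assumed form, Hölder duality gives the pessimistic value in closed form as $\mathbb{E}_x[\langle \phi(x,\pi(x)), \hat{\theta}\rangle] - \beta_p \,\big\|\hat{\Sigma}^{-1/2}\mathbb{E}_x[\phi(x,\pi(x))]\big\|_q$ with $1/p + 1/q = 1$, so $p=2$ yields an elliptical (BCP-style) penalty while $p=\infty$ yields an $\ell_1$-type penalty reminiscent of LCB.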