Part of Advances in Neural Information Processing Systems 35 (NeurIPS 2022) Main Conference Track
Gene Li, Cong Ma, Nati Srebro
We present a family $\{\hat{\pi}_p\}_{p \ge 1}$ of pessimistic learning rules for offline learning of linear contextual bandits, relying on confidence sets with respect to different $\ell_p$ norms, where $\hat{\pi}_2$ corresponds to Bellman-consistent pessimism (BCP), while $\hat{\pi}_\infty$ is a novel generalization of the lower confidence bound (LCB) to the linear setting. We show that the novel $\hat{\pi}_\infty$ learning rule is, in a sense, adaptively optimal, as it achieves the minimax performance (up to log factors) against all $\ell_q$-constrained problems, and as such it strictly dominates all other predictors in the family, including $\hat{\pi}_2$.
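To make the family concrete, below is a minimal Python sketch of one plausible instantiation of $\hat{\pi}_p$. The specific confidence-set shape (here $\{\theta : \|\hat{\Sigma}^{1/2}(\theta - \hat{\theta})\|_p \le \beta\}$ around a regularized least-squares estimate $\hat{\theta}$), the whitening by the empirical covariance, and the radius $\beta$ are illustrative assumptions, not the paper's exact construction. The sketch only shows how Hölder duality turns the inner worst-case minimization into a closed-form penalty: $p = 2$ recovers the familiar elliptical (BCP-style) bonus, while $p = \infty$ yields an $\ell_1$-style penalty, matching the LCB-flavored rule $\hat{\pi}_\infty$.

```python
# Illustrative sketch of an l_p pessimistic learning rule for offline
# linear contextual bandits. The confidence-set form and the radius beta
# are assumptions for exposition, not the paper's precise construction.

import numpy as np


def fit_theta_hat(Phi, r, reg=1e-6):
    """Regularized least-squares estimate of the reward parameter from
    offline data: feature matrix Phi (n x d) and observed rewards r (n,)."""
    n, d = Phi.shape
    Sigma = Phi.T @ Phi / n + reg * np.eye(d)  # empirical covariance
    theta_hat = np.linalg.solve(Sigma, Phi.T @ r / n)
    return theta_hat, Sigma


def pessimistic_value(phi, theta_hat, Sigma, beta, p):
    """Worst-case reward of action features phi over the (assumed)
    confidence set {theta : ||Sigma^{1/2}(theta - theta_hat)||_p <= beta}.

    By Holder duality the inner minimum has the closed form
        <phi, theta_hat> - beta * ||Sigma^{-1/2} phi||_q,
    where q is the dual exponent (1/p + 1/q = 1).
    """
    q = 1.0 if np.isinf(p) else (np.inf if p == 1 else p / (p - 1.0))
    # Compute Sigma^{-1/2} phi via an eigendecomposition (Sigma is PSD).
    w, V = np.linalg.eigh(Sigma)
    whitened = V @ ((V.T @ phi) / np.sqrt(w))
    return phi @ theta_hat - beta * np.linalg.norm(whitened, ord=q)


def pi_hat_p(context_features, theta_hat, Sigma, beta, p):
    """Pessimistic rule: pick the action with the largest worst-case value.
    context_features: (num_actions x d) feature matrix for one context."""
    vals = [pessimistic_value(phi, theta_hat, Sigma, beta, p)
            for phi in context_features]
    return int(np.argmax(vals))
```

As a usage note, calling `pi_hat_p(..., p=2)` penalizes each action by its elliptical uncertainty $\beta\,\|\hat{\Sigma}^{-1/2}\phi\|_2$, whereas `p=np.inf` penalizes by $\beta\,\|\hat{\Sigma}^{-1/2}\phi\|_1$; the abstract's claim is that the latter rule is adaptively minimax-optimal (up to log factors) across $\ell_q$-constrained problem classes.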