Part of Advances in Neural Information Processing Systems 33 (NeurIPS 2020)
Gen Li, Yuting Wei, Yuejie Chi, Yuantao Gu, Yuxin Chen
We investigate the sample efficiency of reinforcement learning in a γ-discounted infinite-horizon Markov decision process (MDP) with state space S and action space A, assuming access to a generative model. Despite a number of prior works tackling this problem, a complete picture of the trade-offs between sample complexity and statistical accuracy is yet to be determined. In particular, prior results suffer from a sample size barrier, in the sense that their claimed statistical guarantees hold only when the sample size exceeds at least |S||A|/(1−γ)² (up to some log factor). The current paper overcomes this barrier by certifying the minimax optimality of model-based reinforcement learning as soon as the sample size exceeds the order of |S||A|/(1−γ) (modulo some log factor). More specifically, a perturbed model-based planning algorithm provably finds an ϵ-optimal policy with an order of |S||A|/((1−γ)³ϵ²) samples (up to log factor) for any 0 < ϵ < 1/(1−γ). Along the way, we derive improved (instance-dependent) guarantees for model-based policy evaluation. To the best of our knowledge, this work provides the first minimax-optimal guarantee for reinforcement learning with a generative model that accommodates the entire range of sample sizes (beyond which finding a meaningful policy is information-theoretically impossible).
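To make the setup concrete, the following is a minimal sketch (not the paper's exact procedure) of perturbed model-based planning with a generative model: draw a fixed number of samples per state-action pair, form the empirical transition model, randomly perturb the rewards, and plan (via value iteration) in the resulting empirical MDP. The interface sample_next_state(s, a), the known reward matrix, and the uniform perturbation scale are illustrative assumptions; the paper's perturbation scheme and analysis may differ in detail.

```python
import numpy as np

def perturbed_model_based_planning(sample_next_state, rewards, gamma,
                                   n_samples, perturb_scale=1e-3,
                                   tol=1e-8, seed=0):
    """Sketch of perturbed model-based planning with a generative model.

    Assumptions (hypothetical interface): sample_next_state(s, a) draws one
    next state from the unknown MDP; rewards is a known |S| x |A| matrix.
    """
    rng = np.random.default_rng(seed)
    S, A = rewards.shape

    # Step 1: empirical transition model P_hat from n_samples
    # generative-model draws per (state, action) pair.
    P_hat = np.zeros((S, A, S))
    for s in range(S):
        for a in range(A):
            for _ in range(n_samples):
                P_hat[s, a, sample_next_state(s, a)] += 1.0
    P_hat /= n_samples

    # Step 2: random reward perturbation (illustrative: uniform noise),
    # used to separate near-optimal policies in the empirical MDP.
    r_perturbed = rewards + perturb_scale * rng.uniform(size=(S, A))

    # Step 3: value iteration in the empirical (perturbed) MDP.
    Q = np.zeros((S, A))
    while True:
        V = Q.max(axis=1)                      # greedy value estimate
        Q_new = r_perturbed + gamma * (P_hat @ V)
        if np.abs(Q_new - Q).max() < tol:
            break
        Q = Q_new
    return Q.argmax(axis=1)  # greedy policy w.r.t. the empirical MDP
```

The abstract's headline result can then be read as: to make the returned policy ϵ-optimal in the true MDP, it suffices (up to log factors) to take n_samples per state-action pair of order 1/((1−γ)³ϵ²), i.e., |S||A|/((1−γ)³ϵ²) samples in total, for the entire range 0 < ϵ < 1/(1−γ).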