Part of Advances in Neural Information Processing Systems 33 (NeurIPS 2020)
Kaiqing Zhang, Sham Kakade, Tamer Başar, Lin Yang
Model-based reinforcement learning (RL), which finds an optimal policy using an empirical model, has long been recognized as one of the cornerstones of RL. It is especially suitable for multi-agent RL (MARL), as it naturally decouples the learning and planning phases, and avoids the non-stationarity that arises when all agents improve their policies simultaneously using samples. Though intuitive and widely used, model-based MARL algorithms have received relatively little attention in terms of their sample complexity. In this paper, we aim to address this fundamental open question about the sample complexity of model-based MARL. We study arguably the most basic MARL setting: two-player discounted zero-sum Markov games, given only access to a generative model of state transitions. We show that model-based MARL achieves a sample complexity of $\tilde{\mathcal{O}}(|\mathcal{S}||\mathcal{A}||\mathcal{B}|(1-\gamma)^{-3}\epsilon^{-2})$ for finding the Nash equilibrium (NE) \emph{value} up to some $\epsilon$ error, as well as the $\epsilon$-NE \emph{policies}, where $\gamma$ is the discount factor and $\mathcal{S}$, $\mathcal{A}$, $\mathcal{B}$ denote the state space and the action spaces of the two agents, respectively. We also show that this method is near-minimax optimal, with a tight dependence on $1-\gamma$ and $|\mathcal{S}|$, by providing a lower bound of $\Omega(|\mathcal{S}|(|\mathcal{A}|+|\mathcal{B}|)(1-\gamma)^{-3}\epsilon^{-2})$. Our results justify the efficiency of this simple model-based approach in the multi-agent RL setting.
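To make the model-based pipeline concrete, below is a minimal sketch of the two-phase approach the abstract describes: estimate an empirical transition model from generative-model samples, then plan in the empirical zero-sum Markov game with Shapley-style minimax value iteration (solving a matrix game at each state via a linear program). This is an illustrative sketch, not the paper's exact algorithm or analysis; the function names (`empirical_model`, `minimax_value_iteration`, `matrix_game_value`) and the toy problem sizes are assumptions introduced here.

```python
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(Q):
    """Value of the zero-sum matrix game max_x min_y x^T Q y via an LP.
    Q: (A, B) payoff matrix for the max-player. Returns (value, row strategy)."""
    A, B = Q.shape
    # Decision variables [x_1, ..., x_A, v]; maximize v subject to
    # v <= (x^T Q)_b for every column b, with x on the probability simplex.
    c = np.zeros(A + 1)
    c[-1] = -1.0                                  # linprog minimizes, so minimize -v
    A_ub = np.hstack([-Q.T, np.ones((B, 1))])     # v - (x^T Q)_b <= 0
    b_ub = np.zeros(B)
    A_eq = np.hstack([np.ones((1, A)), np.zeros((1, 1))])  # sum_a x_a = 1
    b_eq = np.ones(1)
    bounds = [(0, None)] * A + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:A]

def empirical_model(sampler, S, A, B, n):
    """Estimate P(s' | s, a, b) by drawing n samples per state-action pair
    from the generative model `sampler(s, a, b) -> s'`."""
    P_hat = np.zeros((S, A, B, S))
    for s in range(S):
        for a in range(A):
            for b in range(B):
                for _ in range(n):
                    P_hat[s, a, b, sampler(s, a, b)] += 1
    return P_hat / n

def minimax_value_iteration(P_hat, R, gamma, iters=500):
    """Approximate the NE value of the empirical zero-sum Markov game:
    at each state, back up by solving the stage matrix game (Shapley, 1953)."""
    S, A, B, _ = P_hat.shape
    V = np.zeros(S)
    for _ in range(iters):
        for s in range(S):
            Q = R[s] + gamma * (P_hat[s] @ V)     # (A, B) stage game at state s
            V[s], _ = matrix_game_value(Q)
    return V
```

A toy usage, with a randomly generated game standing in for the generative model:

```python
rng = np.random.default_rng(0)
S, A, B = 3, 2, 2
P = rng.dirichlet(np.ones(S), size=(S, A, B))     # true transition kernel (S,A,B,S)
R = rng.uniform(-1, 1, size=(S, A, B))            # reward to the max-player
sampler = lambda s, a, b: rng.choice(S, p=P[s, a, b])
V_hat = minimax_value_iteration(empirical_model(sampler, S, A, B, n=200), R, gamma=0.9)
print(V_hat)
```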