Part of Advances in Neural Information Processing Systems 33 (NeurIPS 2020)
Luo Luo, Haishan Ye, Zhichao Huang, Tong Zhang
We consider nonconvex-concave minimax optimization problems of the form \min_{\bf x}\max_{{\bf y}\in{\mathcal Y}} f({\bf x}, {\bf y}), where f is strongly-concave in {\bf y} but possibly nonconvex in {\bf x} and {\mathcal Y} is a convex and compact set. We focus on the stochastic setting, where we can only access an unbiased stochastic gradient estimate of f at each iteration. This formulation includes many machine learning applications as special cases, such as robust optimization and adversarial training. We are interested in finding an {\mathcal O}(\varepsilon)-stationary point of the function \Phi(\cdot)=\max_{{\bf y}\in{\mathcal Y}} f(\cdot, {\bf y}). The most popular algorithm to solve this problem is stochastic gradient descent ascent (SGDA), which requires {\mathcal O}(\kappa^3\varepsilon^{-4}) stochastic gradient evaluations, where \kappa is the condition number. In this paper, we propose a novel method called Stochastic Recursive gradiEnt Descent Ascent (SREDA), which estimates gradients more efficiently using variance reduction. This method achieves the best known stochastic gradient complexity of {\mathcal O}(\kappa^3\varepsilon^{-3}), and its dependency on \varepsilon is optimal for this problem.
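For illustration, below is a minimal sketch of the projected stochastic gradient descent ascent (SGDA) baseline mentioned in the abstract, not the SREDA method itself. The toy objective f({\bf x}, {\bf y}) = {\bf y}^\top A{\bf x} - \tfrac{1}{2}\|{\bf y}\|^2 (strongly concave in {\bf y}), the choice of the unit ball as the compact set {\mathcal Y}, and the step sizes are all hypothetical choices made only to keep the example self-contained.

```python
import numpy as np

# Illustrative setup: f(x, y) = <y, A x> - 0.5 * ||y||^2, which is strongly
# concave in y and (for this toy choice) bilinear-plus-quadratic overall.
# Y is taken to be the Euclidean unit ball, a convex and compact set.
rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d))


def stochastic_grads(x, y, noise=0.1):
    """Unbiased stochastic gradient estimates of f at (x, y)."""
    gx = A.T @ y + noise * rng.standard_normal(d)        # d/dx of <y, Ax>
    gy = A @ x - y + noise * rng.standard_normal(d)      # d/dy of <y, Ax> - 0.5||y||^2
    return gx, gy


def project_ball(y, radius=1.0):
    """Euclidean projection onto {y : ||y|| <= radius}, our stand-in for Y."""
    norm = np.linalg.norm(y)
    return y if norm <= radius else y * (radius / norm)


def sgda(steps=2000, eta_x=1e-2, eta_y=1e-2):
    """Projected stochastic gradient descent ascent: descend in x, ascend in y."""
    x = rng.standard_normal(d)
    y = np.zeros(d)
    for _ in range(steps):
        gx, gy = stochastic_grads(x, y)
        x = x - eta_x * gx                    # gradient descent step on x
        y = project_ball(y + eta_y * gy)      # projected gradient ascent step on y
    return x, y


x_out, y_out = sgda()
print("||x|| =", np.linalg.norm(x_out), " ||y|| =", np.linalg.norm(y_out))
```

Roughly speaking, SREDA improves on this baseline by replacing the raw stochastic gradients gx and gy with recursively updated, variance-reduced estimates that reuse information from previous iterations and are refreshed periodically with larger batches; this reuse is what drives the improvement from {\mathcal O}(\kappa^3\varepsilon^{-4}) to {\mathcal O}(\kappa^3\varepsilon^{-3}) stochastic gradient evaluations.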