Risk-Sensitive Reinforcement Learning: Near-Optimal Risk-Sample Tradeoff in Regret

Part of Advances in Neural Information Processing Systems 33 (NeurIPS 2020)

Authors

Yingjie Fei, Zhuoran Yang, Yudong Chen, Zhaoran Wang, Qiaomin Xie

Abstract

We study risk-sensitive reinforcement learning in episodic Markov decision processes with unknown transition kernels, where the goal is to optimize the total reward under the risk measure of exponential utility. We propose two provably efficient model-free algorithms, Risk-Sensitive Value Iteration (RSVI) and Risk-Sensitive Q-learning (RSQ). These algorithms implement a form of risk-sensitive optimism in the face of uncertainty, which adapts to both risk-seeking and risk-averse modes of exploration. We prove that RSVI attains an $\tilde{O}\big(\lambda(|\beta| H^2) \cdot \sqrt{H^{3} S^{2} A T}\big)$ regret, while RSQ attains an $\tilde{O}\big(\lambda(|\beta| H^2) \cdot \sqrt{H^{4} S A T}\big)$ regret, where $\lambda(u) = (e^{3u}-1)/u$ for $u > 0$. In the above, $\beta$ is the risk parameter of the exponential utility function, $S$ the number of states, $A$ the number of actions, $T$ the total number of timesteps, and $H$ the episode length. On the flip side, we establish a regret lower bound showing that the exponential dependence on $|\beta|$ and $H$ is unavoidable for any algorithm with an $\tilde{O}(\sqrt{T})$ regret (even when the risk objective is on the same scale as the original reward), thus certifying the near-optimality of the proposed algorithms. Our results demonstrate that incorporating risk awareness into reinforcement learning necessitates an exponential cost in $|\beta|$ and $H$, which quantifies the fundamental tradeoff between risk sensitivity (related to aleatoric uncertainty) and sample efficiency (related to epistemic uncertainty). To the best of our knowledge, this is the first regret analysis of risk-sensitive reinforcement learning with the exponential utility.
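
For context, the exponential utility objective referenced in the abstract is commonly written as the entropic risk measure; the following is the standard form of this objective, and the paper's precise formulation may differ in minor conventions:

$$\max_{\pi} \; \frac{1}{\beta}\, \log \mathbb{E}_{\pi}\Big[\exp\Big(\beta \sum_{h=1}^{H} r_h(s_h, a_h)\Big)\Big],$$

where $\beta > 0$ yields risk-seeking behavior, $\beta < 0$ yields risk-averse behavior, and the limit $\beta \to 0$ recovers the usual risk-neutral expected total reward.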
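As a quick numerical illustration, the snippet below is a minimal sketch (not code from the paper; the helper names `entropic_risk` and `regret_factor` are hypothetical) that estimates the entropic risk of sampled returns for both signs of $\beta$ and evaluates the $\lambda(u) = (e^{3u}-1)/u$ factor appearing in the regret bounds:

```python
import numpy as np

# Illustrative sketch (not the authors' code): the exponential-utility
# objective is the entropic risk measure (1/beta) * log E[exp(beta * R)]
# of the total return R.

def entropic_risk(returns, beta):
    """Stable Monte Carlo estimate of (1/beta) * log E[exp(beta * R)]."""
    m = beta * np.asarray(returns)
    shift = m.max()  # log-sum-exp shift for numerical stability
    return (np.log(np.mean(np.exp(m - shift))) + shift) / beta

def regret_factor(u):
    """The lambda(u) = (exp(3u) - 1) / u factor from the regret bounds, u > 0."""
    return (np.exp(3.0 * u) - 1.0) / u

rng = np.random.default_rng(0)
returns = rng.normal(loc=1.0, scale=0.5, size=100_000)  # sampled episode returns

# beta > 0 is risk-seeking (estimate above the mean); beta < 0 is risk-averse.
# For Gaussian returns the closed form is mean + beta * var / 2.
print(entropic_risk(returns, beta=+1.0))  # ~ 1.0 + 0.25 / 2 = 1.125
print(entropic_risk(returns, beta=-1.0))  # ~ 1.0 - 0.25 / 2 = 0.875
print(np.mean(returns))                   # risk-neutral baseline, ~ 1.0

# The regret prefactor grows exponentially in |beta| * H^2:
print(regret_factor(0.1 * 5**2))  # |beta| = 0.1, H = 5  ->  lambda(2.5) ~ 723
```

Even for the modest values $|\beta| = 0.1$ and $H = 5$ in this sketch, the prefactor $\lambda(|\beta| H^2)$ is already in the hundreds, which makes concrete the risk-sample tradeoff the abstract describes.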