Part of Advances in Neural Information Processing Systems 33 (NeurIPS 2020)
Zakaria Mhammedi, Benjamin Guedj, Robert C. Williamson
Conditional Value at Risk (CVaR) is a "coherent risk measure" which generalizes expectation (recovered at a boundary setting of its parameter). Widely used in mathematical finance, it is garnering increasing interest in machine learning as an alternative approach to regularization, and as a means for ensuring fairness. This paper presents a generalization bound for learning algorithms that minimize the CVaR of the empirical loss. The bound is of PAC-Bayesian type and is guaranteed to be small when the empirical CVaR is small. We achieve this by reducing the problem of estimating CVaR to that of merely estimating an expectation. As a by-product, this reduction enables us to obtain concentration inequalities for CVaR even when the random variable in question is unbounded.
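To make the quantity concrete, the following is a minimal sketch of the empirical CVaR as a tail average of observed losses. It is an illustration only, not the estimator analysed in the paper; the function name `empirical_cvar` and the convention that `alpha` denotes the tail mass are our own choices (conventions in the literature differ, sometimes using 1 − α).

```python
import numpy as np

def empirical_cvar(losses, alpha):
    """Empirical CVaR at tail level alpha: the mean of the worst
    alpha-fraction of observed losses.

    As alpha -> 1 the tail covers the whole sample and this reduces
    to the plain empirical mean, illustrating how CVaR generalizes
    expectation at a boundary setting of its parameter.
    """
    losses = np.sort(np.asarray(losses))[::-1]      # descending: worst losses first
    k = max(1, int(np.ceil(alpha * len(losses))))   # size of the alpha-tail
    return losses[:k].mean()

# Example: a heavy upper tail pushes CVaR well above the mean.
rng = np.random.default_rng(0)
sample = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)
print(f"mean = {sample.mean():.3f}")
print(f"CVaR = {empirical_cvar(sample, alpha=0.05):.3f}")  # average of the worst 5%
```

The gap between the two printed values shows why bounding CVaR is harder than bounding an expectation: the statistic depends only on the (possibly unbounded) upper tail of the loss distribution.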