PLUGIn: A simple algorithm for inverting generative models with recovery guarantees

Part of Advances in Neural Information Processing Systems 34 (NeurIPS 2021)

Authors

Babhru Joshi, Xiaowei Li, Yaniv Plan, Ozgur Yilmaz

Abstract

We consider the problem of recovering an unknown latent code vector under a known generative model. For a $d$-layer deep generative network $G: \mathbb{R}^{n_0} \to \mathbb{R}^{n_d}$ with ReLU activation functions, let the observation be $G(x) + \epsilon$, where $\epsilon$ is noise. We introduce a simple novel algorithm, Partially Linearized Update for Generative Inversion (PLUGIn), to estimate $x$ (and thus $G(x)$). We prove that, when the weights are Gaussian and the layer widths satisfy $n_i \gtrsim 5^i n_0$ (up to log factors), the algorithm converges geometrically to a neighbourhood of $x$ with high probability. Note that this inequality on layer widths allows $n_i > n_{i+1}$ when $i \geq 1$. To our knowledge, this is the first such result for networks with some contractive layers. After a sufficient number of iterations, the estimation errors for both $x$ and $G(x)$ are at most on the order of $\sqrt{4^d n_0 / n_d}\, \|\epsilon\|$. Thus, the algorithm can denoise when the expansion ratio $n_d / n_0$ is large. Numerical experiments on synthetic data and real data are provided to validate our theoretical results and to illustrate that the algorithm can effectively remove artifacts in an image.
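
To make the setting concrete, the NumPy sketch below builds a random ReLU generator with Gaussian weights and runs a PLUGIn-style iteration. The abstract does not spell out the update rule, so the specific form used here is an assumption: the backward step ignores the ReLUs and applies the transposed product of the weight matrices to the residual $y - G(x)$, scaled by $2^d$ to offset the halving effect of ReLU on Gaussian preactivations. The layer widths, weight scaling, and iteration count are likewise illustrative choices, not the paper's.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def make_weights(widths, rng):
    # Gaussian weights W_i of shape (n_i, n_{i-1}); the 1/sqrt(n_i) scaling
    # is an illustrative normalization, not taken from the paper.
    return [rng.standard_normal((widths[i + 1], widths[i])) / np.sqrt(widths[i + 1])
            for i in range(len(widths) - 1)]

def G(weights, x):
    # d-layer ReLU generator: G(x) = relu(W_d ... relu(W_1 x) ...).
    for W in weights:
        x = relu(W @ x)
    return x

def plugin_invert(weights, y, n_iter=100):
    # PLUGIn-style iteration (a sketch, not the paper's verbatim algorithm):
    # the backward step drops the ReLUs and uses the transposed product of
    # the weight matrices; the 2**d factor compensates for ReLU halving the
    # signal, in expectation, at each layer.
    d = len(weights)
    W_bar = weights[0]
    for W in weights[1:]:
        W_bar = W @ W_bar          # W_bar = W_d ... W_1, shape (n_d, n_0)
    x = np.zeros(weights[0].shape[1])
    for _ in range(n_iter):
        x = x + (2.0 ** d) * (W_bar.T @ (y - G(weights, x)))
    return x

rng = np.random.default_rng(0)
widths = [5, 100, 400, 1500]            # n_0 = 5, n_d = 1500: large expansion ratio
weights = make_weights(widths, rng)
x_true = rng.standard_normal(widths[0])
eps = 0.01 * rng.standard_normal(widths[-1])
y = G(weights, x_true) + eps            # noisy observation G(x) + eps
x_hat = plugin_invert(weights, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Under the abstract's guarantee, the final error for these dimensions should be at most on the order of $\sqrt{4^3 \cdot 5 / 1500}\, \|\epsilon\| \approx 0.46\, \|\epsilon\|$, illustrating how a large expansion ratio $n_d / n_0$ lets the iteration denoise the observation.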