Improving Variational Autoencoders with Inverse Autoregressive Flow

Open Access
Authors
  • D. Kingma
  • T. Salimans
  • R. Jozefowicz
  • X. Chen
Publication date 2017
Host editors
  • D.D. Lee
  • U. von Luxburg
  • R. Garnett
  • M. Sugiyama
  • I. Guyon
Book title 30th Annual Conference on Neural Information Processing Systems 2016
Book subtitle Barcelona, Spain, 5-10 December 2016
Series Advances in Neural Information Processing Systems
Event Advances in Neural Information Processing Systems 2016
Volume | Issue number 7
Pages (from-to) 4743-4751
Publisher Red Hook, NY: Curran Associates
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
The framework of normalizing flows provides a general strategy for flexible variational inference of posteriors over latent variables. We propose a new type of normalizing flow, inverse autoregressive flow (IAF), that, in contrast to earlier published flows, scales well to high-dimensional latent spaces. The proposed flow consists of a chain of invertible transformations, where each transformation is based on an autoregressive neural network. In experiments, we show that IAF significantly improves upon diagonal Gaussian approximate posteriors. In addition, we demonstrate that a novel type of variational autoencoder, coupled with IAF, is competitive with neural autoregressive models in terms of attained log-likelihood on natural images, while allowing significantly faster synthesis.
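To illustrate the mechanism the abstract describes, the following is a minimal sketch of a single IAF transformation, not the authors' implementation: a toy masked linear network (all names, the dimensionality `D`, and the random weights are assumptions for illustration) produces a shift and a sigmoid gate autoregressively, and the triangular Jacobian makes the log-determinant a cheap sum.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4  # toy latent dimensionality (assumption)

# A strictly lower-triangular mask enforces the autoregressive property:
# outputs for dimension i depend only on dimensions 1..i-1 of the input.
mask = np.tril(np.ones((D, D)), k=-1)
W_m = rng.normal(size=(D, D)) * mask  # masked weights for the shift mu
W_s = rng.normal(size=(D, D)) * mask  # masked weights for the gate pre-activation

def iaf_step(z):
    """One inverse autoregressive flow transformation.

    Uses the numerically stable gated update from the paper:
        z_new = sigma * z + (1 - sigma) * mu,   sigma in (0, 1).
    Because mu_i and sigma_i depend only on z_{<i}, the Jacobian is
    triangular with diagonal sigma, so log|det J| = sum_i log(sigma_i).
    """
    mu = z @ W_m.T
    sigma = 1.0 / (1.0 + np.exp(-(z @ W_s.T)))  # sigmoid gate
    z_new = sigma * z + (1.0 - sigma) * mu      # all D dims updated in parallel
    log_det = np.sum(np.log(sigma))
    return z_new, log_det

z0 = rng.normal(size=D)
z1, log_det = iaf_step(z0)
```

Stacking several such steps (each with its own autoregressive network, and typically with the dimension ordering reversed between steps) yields the chain of invertible transformations described above; each step costs one parallel network pass, which is why IAF scales to high-dimensional latent spaces.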
Document type Conference contribution
Note Preprint with title: Improved Variational Inference with Inverse Autoregressive Flow. - With supplemental data
Language English
Published at
  • https://arxiv.org/abs/1606.04934
  • https://papers.nips.cc/paper/6581-improved-variational-inference-with-inverse-autoregressive-flow
Other links http://www.proceedings.com/34099.html