Sinkhorn AutoEncoders

Open Access
Authors
Publication date 2019
Host editors
  • A. Globerson
  • R. Silva
Book title Proceedings of the Thirty-Fifth Conference on Uncertainty in Artificial Intelligence
Book subtitle UAI 2019, Tel Aviv, Israel, July 22-25, 2019
Event Conference on Uncertainty in Artificial Intelligence 2019
Article number 253
Number of pages 11
Publisher Corvallis, OR: AUAI Press
Organisations
  • Faculty of Humanities (FGw) - Amsterdam Institute for Humanities Research (AIHR)
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Optimal transport offers an alternative to maximum likelihood for learning generative autoencoding models. We show that minimizing the p-Wasserstein distance between the generator and the true data distribution is equivalent to the unconstrained min-min optimization of the p-Wasserstein distance between the encoder aggregated posterior and the prior in latent space, plus a reconstruction error. We also identify the role of its trade-off hyperparameter as the capacity of the generator: its Lipschitz constant. Moreover, we prove that optimizing the encoder over any class of universal approximators, such as deterministic neural networks, is enough to come arbitrarily close to the optimum. We therefore advertise this framework, which holds for any metric space and prior, as a sweet spot of current generative autoencoding objectives. We then introduce the Sinkhorn auto-encoder (SAE), which approximates and minimizes the p-Wasserstein distance in latent space via backpropagation through the Sinkhorn algorithm. SAE works directly on samples, i.e. it models the aggregated posterior as an implicit distribution, with no need for a reparameterization trick for gradient estimation. SAE is thus able to work with different metric spaces and priors with minimal adaptations. We demonstrate the flexibility of SAE on latent spaces with different geometries and priors and compare with other methods on benchmark data sets.
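The abstract's key computational step is approximating an optimal-transport cost between two sample batches via the Sinkhorn algorithm. The sketch below is a minimal NumPy illustration of that algorithm (entropy-regularized optimal transport between two point clouds with uniform weights); the function name, the squared-Euclidean cost, and the default parameters are illustrative assumptions, not taken from the paper, which differentiates through these iterations in an autoencoder's latent space.

```python
import numpy as np

def sinkhorn_cost(x, y, eps=1.0, n_iters=100):
    """Entropy-regularized OT cost between sample batches x and y.

    Illustrative sketch: uniform marginals, squared-Euclidean ground cost.
    """
    # Pairwise squared-Euclidean cost matrix between the two batches.
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    K = np.exp(-C / eps)                 # Gibbs kernel
    a = np.full(len(x), 1.0 / len(x))    # uniform source marginal
    b = np.full(len(y), 1.0 / len(y))    # uniform target marginal
    u = np.ones_like(a)
    # Alternating scaling updates (Sinkhorn-Knopp iterations).
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]      # approximate transport plan
    return (P * C).sum()                 # regularized transport cost
```

In practice, frameworks with automatic differentiation backpropagate through these scaling iterations, which is what lets the distance act as a training loss; larger `eps` gives a smoother but more biased approximation.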
Document type Conference contribution
Note With supplement
Language English
Published at
  • https://arxiv.org/abs/1810.01118
  • http://auai.org/uai2019/proceedings/papers/253.pdf
Other links
  • https://giorgiop.github.io/assets/paper/2019_UAI.pdf
  • http://auai.org/uai2019/accepted.php
  • https://dblp.org/db/conf/uai/uai2019.html
Downloads
2019_UAI Sinkhorn AutoEncoders (Accepted author manuscript)