The Variational Fair Autoencoder

Open Access
Authors
  • C. Louizos
  • K. Swersky
  • Y. Li
  • M. Welling
  • R. Zemel
Publication date 2016
Book title ICLR 2016: International Conference on Learning Representations: May 2-4, 2016, San Juan, Puerto Rico. Accepted papers (Conference Track)
Event 4th International Conference on Learning Representations
Number of pages 11
Publisher Computational and Biological Learning Society
Organisations
  • Faculty of Science (FNWI)
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
We investigate the problem of learning representations that are invariant to certain nuisance or sensitive factors of variation in the data while retaining as much of the remaining information as possible. Our model is based on a variational autoencoding architecture with priors that encourage independence between sensitive and latent factors of variation. Any subsequent processing, such as classification, can then be performed on this purged latent representation. To remove any remaining dependencies we incorporate an additional penalty term based on the "Maximum Mean Discrepancy" (MMD) measure. We discuss how these architectures can be efficiently trained on data and show in experiments that this method is more effective than previous work in removing unwanted sources of variation while maintaining informative latent representations.
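The MMD penalty described in the abstract measures how far apart the latent representations of different sensitive groups are; driving it toward zero removes residual dependence on the sensitive factor. Below is a minimal NumPy sketch of the standard biased squared-MMD estimator with an RBF kernel; the function names, the `gamma` bandwidth, and the sample data are illustrative assumptions, not taken from the paper's implementation.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Pairwise squared Euclidean distances between rows of a and b,
    # mapped through the Gaussian (RBF) kernel exp(-gamma * d^2).
    sq = np.sum(a**2, axis=1)[:, None] + np.sum(b**2, axis=1)[None, :] - 2.0 * a @ b.T
    return np.exp(-gamma * np.maximum(sq, 0.0))

def mmd2(x, y, gamma=1.0):
    # Biased estimate of the squared Maximum Mean Discrepancy:
    # MMD^2 = E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)].
    # It is the squared distance between kernel mean embeddings,
    # so the biased estimate is always nonnegative.
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean())

# Illustration: two samples from the same distribution give a small MMD,
# while a mean-shifted sample gives a much larger one.
rng = np.random.default_rng(0)
same = mmd2(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)))
diff = mmd2(rng.normal(size=(200, 2)), rng.normal(loc=3.0, size=(200, 2)))
```

In the paper's setting, `x` and `y` would be minibatches of latent codes from the two sensitive groups, and `mmd2` would be added to the variational objective as a penalty term.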
Document type Conference contribution
Note Paper published on arXiv
Language English
Published at https://arxiv.org/abs/1511.00830
Downloads
1511.00830v5.pdf (Final published version)