f-VAEGAN-D2: A Feature Generating Framework for Any-Shot Learning

Authors Yongqin Xian, Saurabh Sharma, Bernt Schiele, Zeynep Akata
Publication date 2019
Book title 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Book subtitle Proceedings: 16-20 June 2019, Long Beach, California
ISBN
  • 9781728132945
ISBN (electronic)
  • 9781728132938
Series CVPR
Event IEEE Conference on Computer Vision and Pattern Recognition
Pages (from-to) 10267-10276
Publisher Los Alamitos, CA: IEEE Computer Society
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
When labeled training data is scarce, a promising data augmentation approach is to generate visual features of unknown classes using their attributes. To learn the class-conditional distribution of CNN features, these models rely on pairs of image features and class attributes; hence, they cannot make use of the abundance of unlabeled data samples. In this paper, we tackle any-shot learning problems, i.e. zero-shot and few-shot, in a unified feature generating framework that operates in both inductive and transductive learning settings. We develop a conditional generative model that combines the strengths of VAEs and GANs and, in addition, learns the marginal feature distribution of unlabeled images via an unconditional discriminator. We empirically show that our model learns highly discriminative CNN features on five datasets, i.e. CUB, SUN, AWA, FLO and ImageNet, and establish a new state of the art in any-shot learning, i.e. inductive and transductive (generalized) zero- and few-shot learning settings. We also demonstrate that our learned features are interpretable: we visualize them by inverting them back to pixel space, and we explain them by generating textual arguments for why they are associated with a certain label.
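To make the described architecture concrete, below is a minimal PyTorch sketch of the three-component design the abstract outlines: a VAE encoder with a shared conditional generator/decoder, a conditional critic (D1) on labeled feature-attribute pairs, and an unconditional critic (D2) that matches the marginal distribution of unlabeled features. The layer sizes (2048-d ResNet features, 312-d attributes, 4096 hidden units), the ReLU output, and the simplified WGAN losses are illustrative assumptions, not the authors' exact configuration; the gradient penalty and loss-weighting terms of the paper are omitted for brevity.

    import torch
    import torch.nn as nn

    # Illustrative sizes (assumptions): ResNet-101 pooled features, CUB attributes.
    FEAT_DIM, ATTR_DIM, Z_DIM, HID = 2048, 312, 312, 4096

    class Encoder(nn.Module):
        """VAE encoder: maps a CNN feature plus class attribute to a latent Gaussian."""
        def __init__(self):
            super().__init__()
            self.hidden = nn.Sequential(nn.Linear(FEAT_DIM + ATTR_DIM, HID), nn.LeakyReLU(0.2))
            self.mu = nn.Linear(HID, Z_DIM)
            self.logvar = nn.Linear(HID, Z_DIM)
        def forward(self, x, a):
            h = self.hidden(torch.cat([x, a], dim=1))
            return self.mu(h), self.logvar(h)

    class Generator(nn.Module):
        """Shared VAE decoder / GAN generator: latent code + attribute -> synthetic feature.
        ReLU output because post-ReLU CNN features are nonnegative."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(Z_DIM + ATTR_DIM, HID), nn.LeakyReLU(0.2),
                nn.Linear(HID, FEAT_DIM), nn.ReLU())
        def forward(self, z, a):
            return self.net(torch.cat([z, a], dim=1))

    class CondCritic(nn.Module):
        """D1: conditional critic on (feature, attribute) pairs from labeled data."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(FEAT_DIM + ATTR_DIM, HID), nn.LeakyReLU(0.2), nn.Linear(HID, 1))
        def forward(self, x, a):
            return self.net(torch.cat([x, a], dim=1))

    class MarginalCritic(nn.Module):
        """D2: unconditional critic on features alone, trained against unlabeled data."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(FEAT_DIM, HID), nn.LeakyReLU(0.2), nn.Linear(HID, 1))
        def forward(self, x):
            return self.net(x)

    def generator_losses(E, G, D1, D2, x_lab, a_lab):
        """Generator-side pass: VAE loss on labeled pairs plus both adversarial terms."""
        mu, logvar = E(x_lab, a_lab)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
        recon = ((G(z, a_lab) - x_lab) ** 2).sum(1).mean()
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(1).mean()
        x_fake = G(torch.randn_like(mu), a_lab)
        adv = -D1(x_fake, a_lab).mean() - D2(x_fake).mean()    # fool both critics
        return recon + kl, adv

    def critic_losses(G, D1, D2, x_lab, a_lab, x_unlab, a_any):
        """Critic-side WGAN losses (gradient penalty omitted). a_any: attributes to
        condition fakes on; novel-class attributes in the transductive setting."""
        x_fake = G(torch.randn(x_lab.size(0), Z_DIM), a_lab).detach()
        d1 = D1(x_fake, a_lab).mean() - D1(x_lab, a_lab).mean()
        x_fake_u = G(torch.randn(x_unlab.size(0), Z_DIM), a_any).detach()
        d2 = D2(x_fake_u).mean() - D2(x_unlab).mean()          # marginal matching on unlabeled data
        return d1, d2

    if __name__ == "__main__":
        E, G, D1, D2 = Encoder(), Generator(), CondCritic(), MarginalCritic()
        x, a, xu = torch.rand(8, FEAT_DIM), torch.rand(8, ATTR_DIM), torch.rand(8, FEAT_DIM)
        print(generator_losses(E, G, D1, D2, x, a))
        print(critic_losses(G, D1, D2, x, a, xu, a))

After training, the standard feature-generating pipeline described in the paper applies: sample z from a unit Gaussian, decode with the attributes of unseen (or few-shot) classes to synthesize their features, and train an ordinary softmax classifier on real seen-class plus synthetic features.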
Document type Conference contribution
Language English
Published at https://doi.org/10.1109/CVPR.2019.01052
Other links http://www.proceedings.com/52034.html