Quantifiers in a Multimodal World: Hallucinating Vision with Language and Sound

Open Access
Authors
  • A. Testoni
  • S. Pezzelle
  • R. Bernardi
Publication date 2019
Host editors
  • E. Chersoni
  • C. Jacobs
  • A. Lenci
  • T. Linzen
  • L. Prévot
  • E. Santus
Book title Cognitive Modeling and Computational Linguistics
Book subtitle NAACL HLT 2019 : proceedings of the workshop : June 7, 2019, Minneapolis, USA
ISBN (electronic)
  • 9781948087964
Event Workshop on Cognitive Modeling and Computational Linguistics at NAACL 2019
Pages (from-to) 105-116
Publisher Stroudsburg, PA: The Association for Computational Linguistics
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract
Inspired by the literature on multisensory integration, we develop a computational model to ground quantifiers in perception. The model learns to pick, out of nine quantifiers (‘few’, ‘many’, ‘all’, etc.), the one that is most likely to describe the percentage of animals in a visual-auditory input containing both animals and artifacts. We show that relying on concurrent sensory inputs increases model performance on the quantification task. Moreover, we evaluate the model in a situation in which only the auditory modality is given, while the visual one is ‘hallucinated’ either from the auditory input itself or from a linguistic caption describing the quantity of entities in the auditory input. This way, the model exploits prior associations between modalities. We show that the model profits from this prior knowledge and outperforms the auditory-only setting.
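
To make the setup concrete, below is a minimal, hypothetical PyTorch sketch of the kind of model the abstract describes: two modality encoders whose embeddings are fused and classified into nine quantifier classes, plus a "hallucination" head that predicts a visual embedding from audio alone when no image is given. All layer names and sizes, the concatenation-based fusion, and most of the class labels are assumptions for illustration, not details taken from the paper; only 'few', 'many', and 'all' are named in the abstract.

import torch
import torch.nn as nn

# Illustrative nine-way label set; only 'few', 'many', 'all' come from the abstract.
QUANTIFIERS = ["none", "almost none", "few", "the smaller part", "some",
               "many", "most", "almost all", "all"]

class QuantifierClassifier(nn.Module):
    def __init__(self, vis_dim=2048, aud_dim=128, hid=256, n_classes=9):
        super().__init__()
        self.vis_enc = nn.Sequential(nn.Linear(vis_dim, hid), nn.ReLU())
        self.aud_enc = nn.Sequential(nn.Linear(aud_dim, hid), nn.ReLU())
        # Fuse the concurrent modalities by concatenation, then classify.
        self.classifier = nn.Linear(2 * hid, n_classes)
        # "Hallucination" head: predicts a pseudo-visual embedding from the
        # auditory embedding, standing in for vision when no image is given.
        self.hallucinate = nn.Sequential(nn.Linear(hid, hid), nn.ReLU(),
                                         nn.Linear(hid, hid))

    def forward(self, aud_feats, vis_feats=None):
        a = self.aud_enc(aud_feats)
        v = self.vis_enc(vis_feats) if vis_feats is not None else self.hallucinate(a)
        return self.classifier(torch.cat([a, v], dim=-1))  # logits over 9 quantifiers

model = QuantifierClassifier()
audio = torch.randn(4, 128)                      # batch of auditory feature vectors
logits_av = model(audio, torch.randn(4, 2048))   # audio plus real visual features
logits_a = model(audio)                          # audio only; vision is hallucinated
print(QUANTIFIERS[logits_a.argmax(-1)[0].item()])

In this sketch the hallucination head plays the role the abstract assigns to prior cross-modal associations: at test time the model substitutes its audio-conditioned prediction of the visual embedding for the missing visual input.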
Document type Conference contribution
Language English
Published at https://doi.org/10.18653/v1/W19-2912