Dealing with Semantic Underspecification in Multimodal NLP

Open Access
Authors
Publication date 2023
Host editors
  • A. Rogers
  • J. Boyd-Graber
  • N. Okazaki
Book title The 61st Conference of the Association for Computational Linguistics
Book subtitle ACL 2023: Proceedings of the Conference: July 9-14, 2023
ISBN (electronic)
  • 9781959429722
Event 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023
Volume | Issue number 1
Pages (from-to) 12098-12112
Number of pages 15
Publisher Stroudsburg, PA: Association for Computational Linguistics
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract

Intelligent systems that aim at mastering language as humans do must deal with its semantic underspecification, namely, the possibility for a linguistic signal to convey only part of the information needed for communication to succeed. Consider the usages of the pronoun they, which can leave the gender and number of its referent(s) underspecified. Semantic underspecification is not a bug but a crucial language feature that boosts its storage and processing efficiency. Indeed, human speakers can quickly and effortlessly integrate semantically-underspecified linguistic signals with a wide range of non-linguistic information, e.g., the multimodal context, social or cultural conventions, and shared knowledge. Standard NLP models have, in principle, no or limited access to such extra information, while multimodal systems grounding language into other modalities, such as vision, are naturally equipped to account for this phenomenon. However, we show that they struggle with it, which could negatively affect their performance and lead to harmful consequences when used for applications. In this position paper, we argue that our community should be aware of semantic underspecification if it aims to develop language technology that can successfully interact with human users. We discuss some applications where mastering it is crucial and outline a few directions toward achieving this goal.

Document type Conference contribution
Note With supplementary video
Language English
Published at https://doi.org/10.18653/v1/2023.acl-long.675
Other links https://www.scopus.com/pages/publications/85174389433
Downloads
2023.acl-long.675 (Final published version)