Do Pre-Trained Language Models Detect and Understand Semantic Underspecification? Ask the DUST!

Open Access
Authors
Publication date 2024
Host editors
  • L.-W. Ku
  • A. Martins
  • V. Srikumar
Book title The 62nd Annual Meeting of the Association for Computational Linguistics: Findings of the Association for Computational Linguistics: ACL 2024
Book subtitle ACL 2024: August 11–16, 2024
ISBN (electronic)
  • 9798891760998
Event Findings of the 62nd Annual Meeting of the Association for Computational Linguistics, ACL 2024
Pages (from–to) 9598–9613
Publisher Kerrville, TX: Association for Computational Linguistics
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
In everyday language use, speakers frequently utter and interpret sentences that are semantically underspecified, that is, sentences whose content is insufficient to fully convey their message or to be interpreted univocally. For example, interpreting the underspecified sentence “Don't spend too much”, which leaves implicit what (not) to spend, requires additional linguistic context or outside knowledge. In this work, we propose a novel Dataset of semantically Underspecified Sentences grouped by Type (DUST) and use it to study whether pre-trained language models (LMs) correctly identify and interpret underspecified sentences. We find that newer LMs are reasonably good at identifying underspecified sentences when explicitly prompted. However, correctly interpreting them is much harder for all LMs. Our experiments show that when interpreting underspecified sentences, LMs exhibit little uncertainty, contrary to what theoretical accounts of underspecification would predict. Overall, our study reveals limitations in current models' processing of sentence semantics and highlights the importance of using naturalistic data and communicative scenarios when evaluating LMs' language capabilities.
Document type Conference contribution
Language English
Published at https://doi.org/10.48550/arXiv.2402.12486 https://doi.org/10.18653/v1/2024.findings-acl.572
Other links https://www.scopus.com/pages/publications/85205323550
Downloads
2024.findings-acl.572 (Final published version)