RAcQUEt: Unveiling the Dangers of Overlooked Referential Ambiguity in Visual LLMs
| Authors | |
|---|---|
| Publication date | 2025 |
| Host editors | |
| Book title | The 2025 Conference on Empirical Methods in Natural Language Processing : Proceedings of the Conference |
| Book subtitle | EMNLP 2025 : November 4-9, 2025 |
| ISBN (electronic) | |
| Event | 30th Conference on Empirical Methods in Natural Language Processing, EMNLP 2025 |
| Pages (from-to) | 23627–23647 |
| Publisher | Kerrville, TX: Association for Computational Linguistics |
| Organisations | |
| Abstract | Ambiguity resolution is key to effective communication. While humans effortlessly address ambiguity through conversational grounding strategies, the extent to which current language models can emulate these strategies remains unclear. In this work, we examine referential ambiguity in image-based question answering by introducing RAcQUEt, a carefully curated dataset targeting distinct aspects of ambiguity. Through a series of evaluations, we reveal significant limitations and overconfidence in how state-of-the-art large multimodal language models address ambiguity in their responses. The overconfidence issue becomes particularly relevant for RAcQUEt-BIAS, a subset designed to analyze a critical yet underexplored problem: failing to address ambiguity leads to stereotypical, socially biased responses. Our results underscore the urgency of equipping models with robust strategies to deal with uncertainty without resorting to undesirable stereotypes. |
| Document type | Conference contribution |
| Note | With checklist |
| Language | English |
| Published at | https://doi.org/10.18653/v1/2025.emnlp-main.1206 |
| Downloads | 2025.emnlp-main.1206 (Final published version) |
| Supplementary materials | |