Aligning Predictive Uncertainty with Clarification Questions in Grounded Dialog

Open Access
Authors
Publication date 2023
Host editors
  • H. Bouamor
  • J. Pino
  • K. Bali
Book title The 2023 Conference on Empirical Methods in Natural Language Processing: Findings of the Association for Computational Linguistics: EMNLP 2023
Book subtitle December 6–10, 2023
ISBN (electronic)
  • 9798891760615
Event 2023 Conference on Empirical Methods in Natural Language Processing
Pages (from-to) 14988–14998
Publisher Stroudsburg, PA: Association for Computational Linguistics
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Asking for clarification is fundamental to effective collaboration. An interactive artificial agent must know when to ask a human instructor for more information in order to ascertain their goals. Previous work bases the timing of questions on supervised models learned from interactions between humans. Instead of a supervised classification task, we wish to ground the need for questions in the acting agent’s predictive uncertainty. In this work, we investigate if ambiguous linguistic instructions can be aligned with uncertainty in neural models. We train an agent using the T5 encoder-decoder architecture to solve the Minecraft Collaborative Building Task and identify uncertainty metrics that achieve better distributional separation between clear and ambiguous instructions. We further show that well-calibrated prediction probabilities benefit the detection of ambiguous instructions. Lastly, we provide a novel empirical analysis on the relationship between uncertainty and dialog history length and highlight an important property that poses a difficulty for detection.
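To illustrate the general idea of grounding clarification in predictive uncertainty, the sketch below scores an instruction with a T5 model's per-token output entropy and flags it as a candidate for a clarification question when the score exceeds a threshold. This is a minimal, hypothetical example: the checkpoint name ("t5-small"), the choice of mean token entropy as the metric, the example instruction, and the threshold value are illustrative assumptions, not the authors' exact setup or data.

```python
# Hedged sketch: flag an instruction as ambiguous when a seq2seq model's
# predictive uncertainty (mean per-token entropy of its own greedy decode)
# exceeds a threshold. Model, metric, and threshold are assumptions.
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
model.eval()

def mean_token_entropy(instruction: str, max_new_tokens: int = 64) -> float:
    """Average entropy of the output distribution at each decoding step."""
    inputs = tokenizer(instruction, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=False,
            output_scores=True,
            return_dict_in_generate=True,
        )
    entropies = []
    for step_logits in out.scores:  # one logits tensor per generated token
        probs = torch.softmax(step_logits[0], dim=-1)
        entropies.append(-(probs * torch.log(probs + 1e-12)).sum().item())
    return sum(entropies) / max(len(entropies), 1)

# Usage: higher entropy means more uncertainty, hence a stronger case for
# asking a clarification question instead of acting on the instruction.
THRESHOLD = 2.0  # would be tuned on held-out clear vs. ambiguous instructions
score = mean_token_entropy("place a red block next to it")
if score > THRESHOLD:
    print(f"uncertain ({score:.2f}): ask a clarification question")
else:
    print(f"confident ({score:.2f}): execute the instruction")
```

In the setting described by the abstract, such a score would be compared between clear and ambiguous instructions to measure distributional separation, and calibrating the model's prediction probabilities (temperature scaling is one common technique, not necessarily the one used in the paper) is reported to benefit the detection of ambiguous instructions.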
Document type Conference contribution
Language English
Published at https://doi.org/10.18653/v1/2023.findings-emnlp.999