Context Does Matter: Implications for Crowdsourced Evaluation Labels in Task-Oriented Dialogue Systems

Open Access
Authors
Publication date 2024
Host editors
  • K. Duh
  • H. Gomez
  • S. Bethard
Book title Findings of the Association for Computational Linguistics: NAACL 2024
Book subtitle Findings 2024: June 16-21, 2024
ISBN (electronic)
  • 9798891761193
Event 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2024): Findings
Pages (from-to) 1258–1273
Publisher Kerrville, TX: Association for Computational Linguistics
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Crowdsourced labels play a crucial role in evaluating task-oriented dialogue systems (TDSs). Obtaining high-quality and consistent ground-truth labels from annotators presents challenges. When evaluating a TDS, annotators must fully comprehend the dialogue before providing judgments. Previous studies suggest using only a portion of the dialogue context in the annotation process, but the impact of this limitation on label quality remains unexplored. This study investigates the influence of dialogue context on annotation quality, considering truncated context for relevance and usefulness labeling. We further propose using large language models (LLMs) to summarize the dialogue context into a rich but short description and study the impact of doing so on annotator performance. We find that reducing context leads to more positive ratings. Conversely, providing the entire dialogue context yields higher-quality relevance ratings but introduces ambiguity in usefulness ratings. Using the first user utterance as context leads to ratings consistent with those obtained using the entire dialogue, with significantly reduced annotation effort. Our findings show how task design, particularly the availability of dialogue context, affects the quality and consistency of crowdsourced evaluation labels.
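For illustration only, and not reproduced from the paper, the sketch below shows one way the three context conditions mentioned in the abstract (full dialogue, first user utterance only, LLM-generated summary) could be assembled before presenting a system response to annotators. The turn format and the `llm_summarize` callable are assumptions, not the authors' setup.

```python
# Hypothetical sketch of the three context conditions described in the abstract.
# `llm_summarize` is a placeholder for whatever LLM summarization call is used.
from typing import Callable, Dict, List


def build_context(turns: List[Dict[str, str]],
                  condition: str,
                  llm_summarize: Callable[[str], str]) -> str:
    """Return the dialogue context shown to annotators for one condition.

    turns: chronologically ordered turns, each {"speaker": ..., "text": ...},
           ending just before the system response being judged.
    condition: "full", "first_user_utterance", or "llm_summary".
    """
    full = "\n".join(f"{t['speaker']}: {t['text']}" for t in turns)
    if condition == "full":
        # Entire preceding dialogue, verbatim.
        return full
    if condition == "first_user_utterance":
        # Only the first user turn: a much cheaper context for annotators.
        first_user = next(t for t in turns if t["speaker"] == "user")
        return f"user: {first_user['text']}"
    if condition == "llm_summary":
        # A short LLM-generated description of the preceding dialogue.
        return llm_summarize(full)
    raise ValueError(f"unknown condition: {condition}")
```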
Document type Conference contribution
Language English
Published at https://doi.org/10.18653/v1/2024.findings-naacl.80