Refer, Reuse, Reduce: Generating Subsequent References in Visual and Conversational Contexts

Open Access
Authors
Publication date 2020
Host editors
  • B. Webber
  • T. Cohn
  • Y. He
  • Y. Liu
Book title 2020 Conference on Empirical Methods in Natural Language Processing
Book subtitle EMNLP 2020: proceedings of the conference: November 16-20, 2020
ISBN (electronic)
  • 9781952148606
Event 2020 Conference on Empirical Methods in Natural Language Processing
Pages (from-to) 4350-4368
Publisher Stroudsburg, PA: The Association for Computational Linguistics
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
  • Faculty of Science (FNWI)
Abstract
Dialogue participants often refer to entities or situations repeatedly within a conversation, which contributes to its cohesiveness. Subsequent references exploit the common ground accumulated by the interlocutors and hence have several interesting properties, namely, they tend to be shorter and reuse expressions that were effective in previous mentions. In this paper, we tackle the generation of first and subsequent references in visually grounded dialogue. We propose a generation model that produces referring utterances grounded in both the visual and the conversational context. To assess the referring effectiveness of its output, we also implement a reference resolution system. Our experiments and analyses show that the model produces better, more effective referring utterances than a model not grounded in the dialogue context, and generates subsequent references that exhibit linguistic patterns akin to humans.
Document type Conference contribution
Language English
Related dataset The PhotoBook Task and Dataset
Published at https://doi.org/10.18653/v1/2020.emnlp-main.353
Downloads 2020.emnlp-main.353 (Final published version)