Building and Evaluating Open-Domain Dialogue Corpora with Clarifying Questions
| Authors | |
|---|---|
| Publication date | 2021 |
| Host editors | |
| Book title | 2021 Conference on Empirical Methods in Natural Language Processing |
| Book subtitle | EMNLP 2021: proceedings of the conference: November 7-11, 2021 |
| ISBN (electronic) | |
| Event | 2021 Conference on Empirical Methods in Natural Language Processing |
| Pages (from-to) | 4473–4484 |
| Publisher | Stroudsburg, PA: The Association for Computational Linguistics |
| Organisations | |
| Abstract | Enabling open-domain dialogue systems to ask clarifying questions when appropriate is an important direction for improving the quality of system responses: when a user request is not specific enough for the system to answer right away, asking a clarifying question increases the chances of retrieving a satisfying answer. To address the problem of asking clarifying questions in open-domain dialogues, (1) we collect and release a new dataset focused on open-domain single- and multi-turn conversations, (2) we benchmark several state-of-the-art neural baselines, and (3) we propose a pipeline consisting of offline and online steps for evaluating the quality of clarifying questions in various dialogues. These contributions provide a foundation for further research. |
| Document type | Conference contribution |
| Note | With supplementary video |
| Language | English |
| Published at | https://doi.org/10.18653/v1/2021.emnlp-main.367 |
| Downloads | 2021.emnlp-main.367 (Final published version) |
| Supplementary materials | |
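
The offline evaluation step mentioned in the abstract can be sketched concretely: given an ambiguous user request and a pool of candidate clarifying questions, a model ranks the candidates and the ranking is scored against relevance labels. The Python sketch below illustrates this under stated assumptions; the `ClarificationExample` layout, the `rank_fn` interface, and the choice of Recall@k and MRR as metrics are illustrative assumptions, not the paper's released data format or official evaluation code.

```python
"""Hypothetical sketch of offline evaluation for clarifying-question
ranking: score a model's ranking of candidate clarifying questions
against relevance labels. Data layout and metrics are assumptions."""

from dataclasses import dataclass


@dataclass
class ClarificationExample:
    request: str           # ambiguous user request
    candidates: list[str]  # pool of candidate clarifying questions
    relevant: set[int]     # indices of candidates judged relevant


def recall_at_k(ranking: list[int], relevant: set[int], k: int) -> float:
    """Fraction of relevant candidates found in the top-k of the ranking."""
    if not relevant:
        return 0.0
    hits = sum(1 for idx in ranking[:k] if idx in relevant)
    return hits / len(relevant)


def mrr(ranking: list[int], relevant: set[int]) -> float:
    """Reciprocal rank of the first relevant candidate (0.0 if none)."""
    for pos, idx in enumerate(ranking, start=1):
        if idx in relevant:
            return 1.0 / pos
    return 0.0


def evaluate(examples: list[ClarificationExample], rank_fn, k: int = 5) -> dict:
    """Average Recall@k and MRR of `rank_fn` over the examples.

    `rank_fn` maps (request, candidates) to candidate indices ordered
    from most to least plausible clarifying question.
    """
    rankings = [rank_fn(e.request, e.candidates) for e in examples]
    return {
        f"recall@{k}": sum(recall_at_k(r, e.relevant, k)
                           for r, e in zip(rankings, examples)) / len(examples),
        "mrr": sum(mrr(r, e.relevant)
                   for r, e in zip(rankings, examples)) / len(examples),
    }


if __name__ == "__main__":
    # Toy example with a trivial baseline that keeps the original order.
    example = ClarificationExample(
        request="Tell me about jaguars.",
        candidates=["Do you mean the animal or the car brand?",
                    "What is your favourite colour?"],
        relevant={0},
    )
    identity_ranker = lambda req, cands: list(range(len(cands)))
    print(evaluate([example], identity_ranker, k=1))
```

Any ranking model, such as one of the neural baselines benchmarked in the paper, could be plugged in as `rank_fn` without changing the scoring code.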
