Learning to Ask Informative Questions: Enhancing LLMs with Preference Optimization and Expected Information Gain

Open Access
Authors
Publication date 2024
Host editors
  • Y. Al-Onaizan
  • M. Bansal
  • Y.-N. Chen
Book title The 2024 Conference on Empirical Methods in Natural Language Processing: Findings of EMNLP 2024
Book subtitle EMNLP 2024: November 12-16, 2024
ISBN (electronic)
  • 9798891761681
Event 2024 Conference on Empirical Methods in Natural Language Processing
Pages (from-to) 5064-5074
Number of pages 11
Publisher Kerrville, TX: Association for Computational Linguistics
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract
Questions are essential tools for acquiring the necessary information to complete information-seeking tasks. However, large language models (LLMs), especially open-source models, often perform poorly in generating informative questions, as measured by expected information gain (EIG). In this paper, we propose a method to enhance the informativeness of LLM-generated questions in 20-question game dialogues. We sample multiple questions from the same model (LLaMA 2-Chat 7B) for each game and create pairs of low-EIG and high-EIG questions to apply a Direct Preference Optimization (DPO) algorithm. Our results show that this method produces more effective questions (in terms of EIG), even in domains different from those used to train the DPO model.
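The abstract's pipeline — score each sampled question by expected information gain over the remaining candidates, then pair low-EIG and high-EIG questions for DPO — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a uniform prior over a known candidate set and deterministic yes/no answers, under which the EIG of a question reduces to the entropy of its answer split. The function names (`eig`, `make_dpo_pairs`) and the predicate-based question representation are hypothetical.

```python
import math


def eig(candidates, predicate):
    """Expected information gain (in bits) of a yes/no question over a
    uniform prior on `candidates`. With deterministic answers, EIG equals
    the binary entropy of the fraction of candidates answering 'yes'."""
    yes = sum(1 for c in candidates if predicate(c))
    p = yes / len(candidates)
    if p in (0.0, 1.0):
        return 0.0  # question splits nothing: zero information gain
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)


def make_dpo_pairs(questions, candidates):
    """Rank sampled (text, predicate) questions by EIG and pair the most
    informative with the least informative, yielding (chosen, rejected)
    preference pairs in the format a DPO trainer typically expects."""
    scored = sorted(questions, key=lambda q: eig(candidates, q[1]),
                    reverse=True)
    half = len(scored) // 2
    return [(hi[0], lo[0])
            for hi, lo in zip(scored[:half], reversed(scored[half:]))]
```

For example, over candidates `0..7`, "Is it even?" splits the set in half (EIG = 1 bit) and would be the *chosen* question, while "Is it 7?" (EIG ≈ 0.54 bits) would be *rejected*; the resulting pairs could then be passed to an off-the-shelf DPO implementation.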
Document type Conference contribution
Language English
Published at https://doi.org/10.18653/v1/2024.findings-emnlp.291