Few-shot learning for opinion summarization

Open Access
Authors
  • A. Bražinskas
  • M. Lapata
  • I. Titov
Publication date 2020
Host editors
  • B. Webber
  • T. Cohn
  • Y. He
  • Y. Liu
Book title 2020 Conference on Empirical Methods in Natural Language Processing
Book subtitle EMNLP 2020 : proceedings of the conference : November 16-20, 2020
ISBN (electronic)
  • 9781952148606
Event 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020
Pages (from-to) 4119-4135
Number of pages 17
Publisher Stroudsburg, PA: The Association for Computational Linguistics
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract

Opinion summarization is the automatic creation of text reflecting subjective information expressed in multiple documents, such as user reviews of a product. The task is practically important and has attracted a lot of attention. However, due to the high cost of summary production, datasets large enough for training supervised models are lacking. Instead, the task has been traditionally approached with extractive methods that learn to select text fragments in an unsupervised or weakly-supervised way. Recently, it has been shown that abstractive summaries, potentially more fluent and better at reflecting conflicting information, can also be produced in an unsupervised fashion. However, these models, not being exposed to actual summaries, fail to capture their essential properties. In this work, we show that even a handful of summaries is sufficient to bootstrap generation of the summary text with all expected properties, such as writing style, informativeness, fluency, and sentiment preservation. We start by training a conditional Transformer language model to generate a new product review given other available reviews of the product. The model is also conditioned on review properties that are directly related to summaries; the properties are derived from reviews with no manual effort. In the second stage, we fine-tune a plug-in module that learns to predict property values on a handful of summaries. This lets us switch the generator to the summarization mode. We show on Amazon and Yelp datasets that our approach substantially outperforms previous extractive and abstractive methods in automatic and human evaluation.
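The first stage described above relies on review properties that can be derived automatically, with no manual annotation, by treating each review in turn as the generation target and measuring it against the remaining reviews of the same product. A minimal sketch of this leave-one-out derivation follows; the specific properties used here (unigram overlap with the peer reviews and relative length) are illustrative stand-ins, not necessarily the exact statistics used in the paper:

```python
def tokenize(text):
    """Lowercase whitespace tokenization (simplified)."""
    return text.lower().split()

def derive_properties(target, others):
    """Compute conditioning properties for one review given its peers.

    target: the review the model learns to generate.
    others: the remaining reviews of the same product.
    Returns a dict of floats that a plug-in module could later predict.
    Note: these two properties are hypothetical examples.
    """
    target_toks = tokenize(target)
    pool = set()
    for review in others:
        pool.update(tokenize(review))
    # Content overlap: fraction of target tokens also found in the peers.
    overlap = sum(t in pool for t in target_toks) / max(len(target_toks), 1)
    # Relative length: target length vs. mean peer length.
    mean_len = sum(len(tokenize(r)) for r in others) / max(len(others), 1)
    rel_len = len(target_toks) / max(mean_len, 1.0)
    return {"overlap": overlap, "rel_len": rel_len}

reviews = [
    "great battery life and a sharp screen",
    "battery lasts long but the screen scratches easily",
    "solid phone with a great screen",
]
# Leave-one-out: each review in turn plays the role of the target.
props = derive_properties(reviews[0], reviews[1:])
print(props)
```

At training time the generator is conditioned on these oracle property values; in the second stage, a small plug-in module fine-tuned on a handful of gold summaries learns to predict summary-like values instead (e.g. high content overlap with the input reviews), which switches the generator into summarization mode.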

Document type Conference contribution
Language English
Published at https://doi.org/10.18653/v1/2020.emnlp-main.337
Other links
  • https://github.com/abrazinskas/FewSum
  • https://slideslive.com/38938830/
  • https://www.scopus.com/pages/publications/85106191090