State-of-the-art generalisation research in NLP: A taxonomy and review

Open Access
Authors
  • Y. Elazar
  • T. Pimentel
  • C. Christodoulopoulos
  • K. Lasri
  • N. Saphra
  • A. Sinclair
  • D. Ulmer
  • F. Schottmann
  • K. Batsuren
  • K. Sun
  • K. Sinha
  • L. Khalatbari
  • M. Ryskina
  • R. Frieske
  • R. Cotterell
  • Z. Jin
Publication date 06-10-2022
Edition v1
Number of pages 107
Publisher arXiv
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract
The ability to generalise well is one of the primary desiderata of natural language processing (NLP). Yet, what 'good generalisation' entails and how it should be evaluated is not well understood, nor are there any evaluation standards for generalisation. In this paper, we lay the groundwork to address both of these issues. We present a taxonomy for characterising and understanding generalisation research in NLP. Our taxonomy is based on an extensive literature review of generalisation research, and contains five axes along which studies can differ: their main motivation, the type of generalisation they investigate, the type of data shift they consider, the source of this data shift, and the locus of the shift within the modelling pipeline. We use our taxonomy to classify over 400 papers that test generalisation, for a total of more than 600 individual experiments. Considering the results of this review, we present an in-depth analysis that maps out the current state of generalisation research in NLP, and we make recommendations for which areas might deserve attention in the future. Along with this paper, we release a webpage where the results of our review can be dynamically explored, and which we intend to update as new NLP generalisation studies are published. With this work, we aim to take steps towards making state-of-the-art generalisation testing the new status quo in NLP.
Document type Preprint
Note Versions v2 (2022) and v3 (2023) also available on arXiv.
Language English
Published at https://doi.org/10.48550/arXiv.2210.03050