Predict the Next Word: <Humans exhibit uncertainty in this task and language models _____>

Open Access
Publication date 2024
Host editors
  • Y. Graham
  • M. Purver
Book title The 18th Conference of the European Chapter of the Association for Computational Linguistics
Book subtitle Proceedings of the Conference: EACL 2024: March 17–22, 2024
ISBN (electronic)
  • 9798891760899
Event 18th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2024
Volume 2
Pages (from-to) 234-255
Publisher Kerrville, TX: Association for Computational Linguistics
Organisations
  • Faculty of Science (FNWI)
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract
Language models (LMs) are statistical models trained to assign probability to human-generated text. As such, it is reasonable to question whether they approximate linguistic variability exhibited by humans well. This form of statistical assessment is difficult to perform at the passage level, for it requires acceptability judgments (i.e., human evaluation) or a robust automated proxy (which is non-trivial). At the word level, however, given some context, samples from an LM can be assessed via exact matching against a prerecorded dataset of alternative single-word continuations of the available context. We exploit this fact and evaluate the LM’s ability to reproduce variability that humans (in particular, a population of English speakers) exhibit in the ‘next word prediction’ task. This can be seen as assessing a form of calibration, which, in the context of text classification, Baan et al. (2022) termed calibration to human uncertainty. We assess GPT2, BLOOM and ChatGPT and find that they exhibit fairly low calibration to human uncertainty. We also verify the failure of expected calibration error (ECE) to reflect this, and as such, advise the community against relying on it in this setting.
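The word-level assessment described in the abstract can be sketched concretely: collect single-word continuations of a shared context from humans and from an LM, form the two empirical next-word distributions, and measure how far apart they are. The sketch below uses total variation distance as one plausible divergence; the paper's exact metric is not stated in this abstract, and the toy continuation data is entirely hypothetical.

```python
from collections import Counter


def distribution(samples):
    """Empirical next-word distribution from a list of single-word continuations."""
    counts = Counter(samples)
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}


def total_variation_distance(p, q):
    """TVD between two next-word distributions over a shared context.

    0.0 means the model reproduces human variability exactly; 1.0 means
    the two distributions share no probability mass.
    """
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(w, 0.0) - q.get(w, 0.0)) for w in support)


# Hypothetical data: continuations of a prompt such as
# "Humans exhibit uncertainty in this task and language models _____"
human_continuations = ["should", "do", "don't", "should", "can"]
model_samples = ["are", "should", "are", "are", "do"]

tvd = total_variation_distance(
    distribution(human_continuations), distribution(model_samples)
)
print(f"TVD = {tvd:.2f}")  # prints "TVD = 0.60" for this toy data
```

Exact string matching against a prerecorded set of human continuations, as the abstract notes, is what makes this tractable at the word level: no acceptability judgments are needed, only counting which continuations coincide.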
Document type Conference contribution
Note With supplementary video
Language English
Published at https://doi.org/10.18653/v1/2024.eacl-short.22
Downloads
2024.eacl-short.22v1 (Accepted author manuscript)
2024.eacl-short.22v2 (Final published version)