Undesirable Biases in NLP: Addressing Challenges of Measurement

Open Access
Authors
Publication date 01-2024
Journal Journal of Artificial Intelligence Research
Volume 79
Pages (from-to) 1-40
Number of pages 40
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract

As Large Language Models and Natural Language Processing (NLP) technology rapidly develop and spread into daily life, it becomes crucial to anticipate how their use could harm people. One problem that has received a lot of attention in recent years is that this technology has displayed harmful biases, from generating derogatory stereotypes to producing disparate outcomes for different social groups. Although much effort has been invested in assessing and mitigating these biases, our methods of measuring the biases of NLP models have serious problems, and it is often unclear what they actually measure. In this paper, we provide an interdisciplinary approach to discussing the issue of NLP model bias by adopting the lens of psychometrics — a field specialized in the measurement of concepts like bias that are not directly observable. In particular, we explore two central notions from psychometrics, the construct validity and the reliability of measurement tools, and discuss how they can be applied in the context of measuring model bias. Our goal is to provide NLP practitioners with methodological tools for designing better bias measures, and to inspire them more generally to explore tools from psychometrics when working on bias measurement tools.

Document type Article
Language English
Published at https://doi.org/10.1613/jair.1.15195
Other links https://www.scopus.com/pages/publications/85183902697