How Are LLMs Mitigating Stereotyping Harms? Learning from Search Engine Studies

Open Access
Authors
Publication date 2024
Host editors
  • S. Das
  • B.P. Green
  • K. Varshney
  • M. Ganapini
  • A. Renda
Book title Proceedings of the Seventh AAAI/ACM Conference on AI, Ethics, and Society
Book subtitle AIES-24
ISBN
  • 9781577358923
Event 7th AAAI/ACM Conference on AI, Ethics, and Society
Pages (from-to) 839-854
Publisher Washington, DC: AAAI Press
Organisations
  • Faculty of Humanities (FGw) - Amsterdam Institute for Humanities Research (AIHR) - Amsterdam School for Cultural Analysis (ASCA)
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract
With the widespread availability of LLMs since the release of ChatGPT and increased public scrutiny, commercial model developers appear to have focused their 'safety' training on legal liabilities at the expense of social impact evaluation. This mirrors a trend observed in search engine autocompletion some years prior. Drawing on scholarship from NLP and search engine auditing, we present a novel evaluation task, styled after autocompletion prompts, to assess stereotyping in LLMs. We assess LLMs using four metrics, namely refusal rates, toxicity, sentiment and regard, with and without safety system prompts. Our findings indicate that the system prompt improves stereotyping outputs, but that the LLMs under study overall pay insufficient attention to certain harms classified as toxic, particularly for prompts about peoples/ethnicities and sexual orientation. Mentions of intersectional identities trigger a disproportionate amount of stereotyping. Finally, we discuss the implications of these findings for stereotyping harms in light of the coming intermingling of LLMs and search, and the choice of stereotyping mitigation policy to adopt. We address model builders, academics, NLP practitioners and policy makers, calling for accountability and awareness concerning stereotyping harms, be it in training data curation, leaderboard design and usage, or social impact measurement.
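To make the refusal-rate metric mentioned in the abstract concrete, the sketch below shows one simple way such a rate could be computed over autocompletion-style outputs. All prompts, completions, and the keyword heuristic are hypothetical illustrations, not the paper's actual method or data; the paper additionally scores toxicity, sentiment, and regard with dedicated measures, which are omitted here.

```python
# Hedged sketch of a refusal-rate metric for autocompletion-style prompts.
# The marker list and the startswith heuristic are illustrative assumptions,
# not the classifiers or annotation scheme used in the paper.

REFUSAL_MARKERS = (
    "i can't", "i cannot", "i'm sorry", "as an ai",
)

def is_refusal(completion: str) -> bool:
    """Flag a completion as a refusal if it opens with a known refusal
    phrase (a crude stand-in for proper refusal detection)."""
    return completion.strip().lower().startswith(REFUSAL_MARKERS)

def refusal_rate(completions: list[str]) -> float:
    """Fraction of completions flagged as refusals."""
    if not completions:
        return 0.0
    return sum(is_refusal(c) for c in completions) / len(completions)

# Example with made-up completions:
sample = [
    "I can't help with that request.",
    "People from that region are famous for their cuisine.",
]
print(refusal_rate(sample))  # 0.5
```

Comparing this rate with and without a safety system prompt, as the paper does, would then show how much of the mitigation comes from outright refusal rather than from changed content.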
Document type Conference contribution
Language English
Published at https://doi.org/10.1609/aies.v7i1.31684