Large Language Models Outperform Expert Coders and Supervised Classifiers at Annotating Political Social Media Messages

Open Access
Authors
Publication date 12-2025
Journal Social Science Computer Review
Volume 43, Issue 6
Pages 1181-1195
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract

Instruction-tuned Large Language Models (LLMs) have recently emerged as a powerful new tool for text analysis. Because these models are capable of zero-shot annotation based on instructions written in natural language, they obviate the need for large sets of training data, and thus carry potentially paradigm-shifting implications for using text as data. While the models show substantial promise, their performance relative to human coders and supervised models remains poorly understood and subject to significant academic debate. This paper assesses the strengths and weaknesses of popular fine-tuned AI models compared to both conventional supervised classifiers and manual annotation by experts and crowd workers. The task is to identify the political affiliation of politicians from a single X/Twitter message, using data from 11 countries. The paper finds that GPT-4 achieves higher accuracy than both supervised models and human coders across all languages and country contexts; in the US context, it reaches an accuracy of 0.934 and an inter-coder reliability of 0.982. Examining the cases where the models fail, the paper finds that the LLM, unlike the supervised models, correctly annotates messages that require interpreting implicit or unspoken references, or reasoning from contextual knowledge: capacities that have traditionally been understood to be distinctly human. The paper thus contributes to our understanding of the revolutionary implications of LLMs for text analysis within the social sciences.
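The zero-shot annotation workflow described in the abstract can be sketched as below. This is a minimal illustration, not the paper's actual materials: the prompt wording, the party labels, the toy messages, and the keyword-matching `annotate` stub (standing in for a real LLM API call such as one to GPT-4) are all assumptions made for the example.

```python
# Illustrative sketch of zero-shot annotation: classify the party affiliation
# of a politician from a single message, then score accuracy against gold
# labels. The keyword stub below is a hypothetical stand-in for an LLM call;
# prompt text, labels, and messages are assumptions, not the paper's data.

PROMPT = (
    "You will read one X/Twitter message written by a politician. "
    "Answer with exactly one label: Democrat or Republican."
)

def annotate(message: str) -> str:
    """Stand-in for an LLM API call; a real setup would send PROMPT + message."""
    text = message.lower()
    if "medicare for all" in text or "climate action" in text:
        return "Democrat"
    return "Republican"

# Toy evaluation mirroring the accuracy metric reported in the abstract.
messages = [
    ("We must pass Medicare for All now.", "Democrat"),
    ("Cut taxes and secure the border.", "Republican"),
    ("Climate action cannot wait.", "Democrat"),
]
correct = sum(annotate(m) == gold for m, gold in messages)
accuracy = correct / len(messages)
print(f"accuracy = {accuracy:.3f}")
```

In the paper's actual setting, the natural-language instruction replaces a labeled training set entirely, which is what distinguishes zero-shot annotation from the supervised classifiers it is compared against.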

Document type Article
Language English
DOI https://doi.org/10.1177/08944393241286471
Other links https://www.scopus.com/pages/publications/85205299947