Evaluating Locally Run Large Language Models on Toxic Meme Analysis

Open Access
Publication date 2026
Host editors
  • Himanshu Verma
  • Alessandro Bozzon
  • Jie Yang
  • Andrea Mauri
Book title Web Engineering
Book subtitle 25th International Conference, ICWE 2025, Delft, The Netherlands, June 30–July 3, 2025, Proceedings
ISBN
  • 9783031972065
ISBN (electronic)
  • 9783031972072
Series Lecture Notes in Computer Science
Event 25th International Conference on Web Engineering, ICWE 2025
Pages (from-to) 128-135
Number of pages 8
Publisher Cham: Springer
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract

Toxic memes spread easily online, propagating stereotypes, hate, and other forms of malicious content, both blatant and nuanced. The sheer volume of memes requiring moderation calls for automated methods, but their multiple layers of meaning make them challenging to assess: in some cases, toxicity stems from subtle wordplay; in others, from visual references or the evocation of hateful symbols. Large language models (LLMs) are a promising tool for toxicity detection, since they can leverage large amounts of contextual information and analyze content items in depth. In this paper, we investigate the suitability of locally run LLMs for this task. Locally run large language models have several advantages over web-based models such as OpenAI’s ChatGPT with respect to cost, reproducibility, and data safety. We evaluate the local models on the tasks of automatic meme analysis and toxic symbol identification, and compare the results with analyses by the online model ChatGPT. Our findings reveal that while local models identify only a limited number of toxic memes and symbols, their labels are often accurate (low recall, high precision). Although they do not achieve perfect performance, we believe these models can effectively support human content moderators.
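The "low recall, high precision" behaviour described in the abstract can be made concrete with a minimal sketch. The labels below are invented purely for illustration and are not data from the paper: they model a conservative classifier that flags few memes as toxic but is correct when it does.

```python
# Minimal sketch: precision and recall for binary toxicity labels.
# The gold/pred label lists below are invented for illustration;
# they are NOT data from the paper.

def precision_recall(gold, pred):
    """Compute precision and recall for the positive ('toxic') class."""
    tp = sum(1 for g, p in zip(gold, pred) if g and p)
    fp = sum(1 for g, p in zip(gold, pred) if not g and p)
    fn = sum(1 for g, p in zip(gold, pred) if g and not p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# A conservative model: it flags only 2 of the 4 toxic memes,
# but every flag it raises is correct.
gold = [1, 1, 1, 1, 0, 0, 0, 0]  # 4 toxic memes in the sample
pred = [1, 1, 0, 0, 0, 0, 0, 0]  # model flags only 2, both correctly

p, r = precision_recall(gold, pred)
print(p, r)  # precision 1.0, recall 0.5
```

Such a profile is arguably useful for assisting moderators: items the model flags can be prioritised with confidence, while the remainder still receives human review.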

Document type Conference contribution
Language English
Published at https://doi.org/10.1007/978-3-031-97207-2_10
Other links https://www.scopus.com/pages/publications/105020017850