Simulating Social Media Using Large Language Models to Evaluate Alternative News Feed Algorithms

Open Access
Publication date 11-10-2023
Edition v1
Number of pages 11
Publisher ArXiv
Organisations
  • Faculty of Social and Behavioural Sciences (FMG) - Amsterdam Institute for Social Science Research (AISSR)
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract
Social media is often criticized for amplifying toxic discourse and discouraging constructive conversations. But designing social media platforms to promote better conversations is inherently challenging. This paper asks whether simulating social media through a combination of Large Language Models (LLMs) and Agent-Based Modeling can help researchers study how different news feed algorithms shape the quality of online conversations. We create realistic personas using data from the American National Election Study to populate simulated social media platforms. Next, we prompt the agents to read and share news articles—and like or comment upon each other’s messages—within three platforms that use different news feed algorithms. In the first platform, users see the most liked and commented posts from users whom they follow. In the second, they see posts from all users—even those outside their own network. The third platform employs a novel “bridging” algorithm that highlights posts that are liked by people with opposing political views. We find this bridging algorithm promotes more constructive, non-toxic conversation across political divides than the other two models. Though further research is needed to evaluate these findings, we argue that LLMs hold considerable potential to improve simulation research on social media and many other complex social settings.
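The core idea of a “bridging” feed—ranking posts by approval from people with opposing political views rather than by raw engagement—can be illustrated with a minimal sketch. This is not the paper’s implementation; the `Post` fields, the two-party like counts, and the `min`-based score are illustrative assumptions only.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    text: str
    # Hypothetical per-party like counts; a real system would infer these
    # from user attributes (e.g., the ANES-derived personas in the paper).
    likes_left: int = 0
    likes_right: int = 0

def bridging_score(post: Post) -> int:
    """Score a post by the likes from its *less* supportive side,
    so only posts approved across the divide rank highly."""
    return min(post.likes_left, post.likes_right)

def bridging_feed(posts: List[Post]) -> List[Post]:
    """Order the feed by cross-partisan approval, highest first."""
    return sorted(posts, key=bridging_score, reverse=True)

posts = [
    Post("partisan hot take", likes_left=40, likes_right=1),
    Post("local news report", likes_left=12, likes_right=10),
    Post("policy explainer", likes_left=8, likes_right=9),
]
feed = bridging_feed(posts)
# The heavily one-sided post sinks despite having the most total likes.
```

By contrast, the first platform described above would rank the same posts purely by likes and comments within a user’s follow network, which favors the one-sided post.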
Document type Preprint
Language English
Published at https://doi.org/10.48550/arXiv.2310.05984
Downloads
2310.05984v1 (Final published version)