No easy fix to countering AI-generated visual disinformation: The (in)effectiveness of AI-labels, fact-check labels and community notes

Open Access
Publication date 17-07-2025
Publisher OSF Preprints
Organisations
  • Faculty of Social and Behavioural Sciences (FMG) - Amsterdam School of Communication Research (ASCoR)
Abstract
As generative AI makes it easier to create synthetic visuals, AI-driven visual disinformation is becoming more common on social media. However, while much research highlights its potential harm, less is known about how to reduce its potential to mislead. In this study, we therefore conducted a preregistered online experiment in the Netherlands (N=1,018) to test the effectiveness of three platform interventions: (1) AI labels or “watermarks,” (2) fact-check labels, and (3) community notes. We tested how effective these interventions are in lowering the credibility of a false visual and belief in the false claim it portrays across two polarizing topics: climate change and immigration. Overall, the interventions showed no significant differences in effectiveness. This was the case both when pooling the two topics and for climate-change-related disinformation in isolation. However, for visual disinformation about immigration, community notes were most effective, especially among participants with strong anti-migrant views. Our findings suggest that while labeling has limited impact overall, its effectiveness varies by context, and no one-size-fits-all solution exists for combating AI-generated visual disinformation.
Document type Preprint
Language English
Published at https://doi.org/10.31219/osf.io/8237p_v1
Other links https://osf.io/xktzh