They would never say anything like this! Reasons to doubt political deepfakes

Open Access
Publication date: February 2024
Journal: European Journal of Communication
Volume 39, Issue 1, pp. 56-70
Organisations:
  • Faculty of Social and Behavioural Sciences (FMG) - Amsterdam School of Communication Research (ASCoR)
Abstract
Although deepfakes are conventionally regarded as dangerous, we know little about how deepfakes are perceived and which potential motivations drive doubt in the believability of deepfakes versus authentic videos. To better understand the audience's perceptions of deepfakes, we ran an online experiment (N = 829) in which participants were randomly exposed to a politician's textual or audio-visual authentic speech, or to a textual or audio-visual manipulation (a deepfake) in which this politician's speech was forged to include a radical right-wing populist narrative. In response to both textual disinformation and deepfakes, we inductively assessed (1) the perceived motivations for expressed doubt and uncertainty in response to disinformation and (2) the accuracy of such judgments. Key findings show that participants have a hard time distinguishing a deepfake from a related authentic video, and that the distance of a deepfake's content from reality is a more likely cause of doubt than perceived technological glitches. Together, we offer new insights into news users' abilities to distinguish deepfakes from authentic news, which may inform (targeted) media literacy interventions promoting accurate verification skills among the audience.
Document type: Article
Language: English
DOI: https://doi.org/10.1177/02673231231184703