Fooled twice: People cannot detect deepfakes but think they can.

iScience

Amsterdam School of Economics, University of Amsterdam, 1001 NJ Amsterdam, The Netherlands.

Published: November 2021

Hyper-realistic manipulations of audio-visual content, i.e., deepfakes, present new challenges for establishing the veracity of online content. Research on the human impact of deepfakes remains sparse. In a pre-registered behavioral experiment (N = 210), we show that (1) people cannot reliably detect deepfakes and (2) neither raising awareness nor introducing financial incentives improves their detection accuracy. Zeroing in on the underlying cognitive processes, we find that (3) people are biased toward mistaking deepfakes for authentic videos (rather than vice versa) and (4) they overestimate their own detection abilities. Together, these results suggest that people adopt a "seeing-is-believing" heuristic for deepfake detection while being overconfident in their (low) detection abilities. The combination renders people particularly susceptible to being influenced by deepfake content.


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8602050
DOI: http://dx.doi.org/10.1016/j.isci.2021.103364
