Warning: Humans cannot reliably detect speech deepfakes.

PLoS One

Department of Computer Science, University College London, London, United Kingdom.

Published: August 2023

Speech deepfakes are artificial voices generated by machine learning models. Previous literature has highlighted deepfakes as one of the biggest security threats arising from progress in artificial intelligence due to their potential for misuse. However, studies investigating human detection capabilities are limited. We presented genuine and deepfake audio to n = 529 individuals and asked them to identify the deepfakes. We ran our experiments in English and Mandarin to understand whether language affects detection performance and decision-making rationale. We found that detection capability is unreliable: listeners correctly identified the deepfakes only 73% of the time, and there was no difference in detectability between the two languages. Increasing listener awareness by providing examples of speech deepfakes improved results only slightly. As speech synthesis algorithms improve and become more realistic, we can expect the detection task to become harder. The difficulty of detecting speech deepfakes confirms their potential for misuse and signals that defenses against this threat are needed.
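To put the headline figure in context, here is a minimal sketch (not from the paper) of how one could test whether the reported 73% detection rate differs from the 50% expected by chance. It assumes, purely for illustration, one binary judgment per listener (n = 529 trials); the study presented multiple clips to each listener, so the trial count and variable names here are hypothetical.

    from scipy.stats import binomtest

    # Assumption for illustration only: one real-vs-fake judgment per
    # listener, giving n = 529 independent trials at the reported 73%.
    n_judgments = 529
    k_correct = round(0.73 * n_judgments)  # reported 73% accuracy

    # One-sided exact binomial test against 50% chance performance.
    result = binomtest(k_correct, n_judgments, p=0.5, alternative="greater")
    print(f"accuracy = {k_correct / n_judgments:.1%}, p = {result.pvalue:.3g}")

The sketch makes the paper's point concrete: 73% is comfortably above chance, yet far below the accuracy one would need before relying on human listeners as a defense.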

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10395974
PLOS: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0285333

Publication Analysis

Top Keywords

speech deepfakes (16)
potential misuse (8)
deepfakes (7)
speech (5)
warning humans (4)
humans reliably (4)
reliably detect (4)
detect speech (4)
deepfakes speech (4)
deepfakes artificial (4)

Similar Publications

Artificially generated content threatens to seriously disrupt the public sphere. Generative AI massively facilitates the production of convincing portrayals of fabricated events. We have already begun to witness the spread of synthetic misinformation, political propaganda, and non-consensual intimate deepfakes.

Article Synopsis
  • Recent advancements in technology have made it increasingly difficult for people to distinguish between real political speeches and deepfake videos due to hyper-realistic audio and visual effects.
  • A study involving 2,215 participants found that factors like misinformation rates and question framing did not significantly impact people’s ability to identify authenticity.
  • The research revealed that deepfake videos with advanced text-to-speech audio were harder to detect than those voiced by actors, and that overall, people relied more on audio and visual cues than on the content of the speech to discern real from fake.

Background: The digital era has witnessed an escalating dependence on digital platforms for news and information, coupled with the advent of "deepfake" technology. Deepfakes, leveraging deep learning models on extensive data sets of voice recordings and images, pose substantial threats to media authenticity, potentially leading to unethical misuse such as impersonation and the dissemination of false information.

Objective: To counteract this challenge, this study aims to introduce the use of innate biological processes to distinguish authentic human voices from cloned voices.

Deepfakes are viral ingredients of digital environments, and they can trick human cognition into misperceiving the fake as real. Here, we test the neurocognitive sensitivity of 25 participants to accept or reject person identities as recreated in audio deepfakes. We generate high-quality voice identity clones from natural speakers by using advanced deepfake technologies.
