Deepfakes are a form of multi-modal media generated using deep-learning technology. Many academics have expressed fears that deepfakes present a severe threat to the veracity of news and political communication, and an epistemic crisis for video evidence. These commentaries have often been hypothetical, with few real-world cases of deepfakes' political and epistemic harms. The Russo-Ukrainian war presents the first real-life example of deepfakes being used in warfare, with a number of incidents involving deepfakes of Russian and Ukrainian government officials being used for misinformation and entertainment. This study applies thematic analysis to tweets relating to deepfakes and the Russo-Ukrainian war to explore how people react to deepfake content online, and to uncover evidence of previously theorised harms of deepfakes on trust. We extracted 4869 relevant tweets using the Twitter API over the first seven months of 2022. We found that much of the misinformation in our dataset came from labelling real media as deepfakes. Novel findings about deepfake scepticism emerged, including a connection between deepfakes and conspiratorial beliefs that world leaders were dead and/or replaced by deepfakes. This research has numerous implications for future research, social media platforms, news media and governments. The lack of deepfake literacy in our dataset led to significant misunderstandings of what constitutes a deepfake, showing the need to encourage literacy in these new forms of media. However, our evidence demonstrates that efforts to raise awareness around deepfakes may undermine trust in legitimate videos. Consequently, news media and governmental agencies need to weigh the benefits of educational deepfakes and pre-bunking against the risks of undermining truth. Similarly, news organisations and media outlets should be careful in how they label suspected deepfakes in case they cause suspicion of real media.
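The abstract describes extracting tweets via the Twitter API and analysing them thematically. As an illustrative sketch only (the study's actual method was manual qualitative coding, and the tweet texts and keyword list below are hypothetical), the corpus-level keyword counts reported under "Top Keywords" could be reproduced with a simple case-insensitive tally:

```python
from collections import Counter

def keyword_frequencies(tweets, keywords):
    """Count how many tweets mention each keyword or phrase (case-insensitive).

    A simplified, hypothetical stand-in for a corpus-statistics step;
    it does not replicate the authors' thematic-analysis pipeline.
    """
    counts = Counter()
    for text in tweets:
        lowered = text.lower()
        for kw in keywords:
            if kw in lowered:
                counts[kw] += 1
    return counts

# Hypothetical example tweets, not drawn from the study's dataset.
tweets = [
    "That video is a Deepfake, don't trust it",
    "Another deepfake of a world leader circulating",
    "Is this real media or just another deepfake?",
]
freq = keyword_frequencies(tweets, ["deepfake", "real media"])
# freq["deepfake"] == 3, freq["real media"] == 1
```

A substring match like this over-counts (e.g. "deepfakes" matches "deepfake"); a real pipeline would tokenise and normalise before counting.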


Source

PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10599512
PLOS: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0291668

Publication Analysis

Top Keywords

deepfakes (13); thematic analysis (8); analysis tweets (8); deepfakes russian (8); media (8); russo-ukrainian war (8); real media (8); news media (8); deepfake (5); deepfake videos (4)

Similar Publications

The advent of deepfake technology has raised significant concerns regarding its impact on individuals' cognitive processes and beliefs, considering the pervasive relationships between technology and human cognition. This study delves into the psychological literature surrounding deepfakes, focusing on people's public representation of this emerging technology and highlighting prevailing themes, opinions, and emotions. Media framing serves as the theoretical framework, as it is crucial in shaping individuals' cognitive schemas regarding technology.


The proliferation of deepfake generation has become increasingly widespread. Current solutions for automatically detecting and classifying generated content require substantial computational resources, making them impractical for use by the average non-expert individual, particularly in edge computing applications. In this paper, we propose a series of techniques to accelerate the inference speed of deepfake detection on video data.


Introduction: The rapid escalation of cyber threats necessitates innovative strategies to enhance cybersecurity and privacy measures. Artificial Intelligence (AI) has emerged as a promising tool poised to enhance the effectiveness of cybersecurity strategies by offering advanced capabilities for intrusion detection, malware classification, and privacy preservation. However, this work addresses the significant lack of a comprehensive synthesis of AI's use in cybersecurity and privacy across the vast literature, aiming to identify existing gaps and guide further progress.


With further development of generative AI, primarily generative-adversarial networks (GANs), deepfakes are gaining in quality and accessibility. Forensic methods designed for the examination of handwriting are often applied to its digital copies, despite possibly being insensitive to GAN-made forgeries unless methods of digital forensics are also employed. Approaching this problem from a novel perspective, we created a translational GAN tasked with generating false handwritten signatures from limited examples, aiming to ascertain whether traditional methods of signature examination are effective against such forgeries.


In this short paper, I respond to Keith Raymond Harris' paper "Synthetic Media, The Wheel, and the Burden of Proof". In particular, I examine his arguments against two prominent approaches employed to deal with synthetic media such as deepfakes and other GenAI content, namely, the "reactive" and "proactive" approaches. In the first part, I raise a worry about the problem Harris levels at the reactive approach, before providing a constructive way of expanding his worry regarding the proactive approach.

