The advent of deepfake technology has raised significant concerns regarding its impact on individuals' cognitive processes and beliefs, considering the pervasive relationships between technology and human cognition. This study delves into the psychological literature surrounding deepfakes, focusing on people's public representation of this emerging technology and highlighting prevailing themes, opinions, and emotions. Media framing, the theoretical framework adopted here, is crucial in shaping individuals' cognitive schemas regarding technology. A qualitative method was applied to unveil patterns, correlations, and recurring themes in beliefs about the main topic, deepfakes, as discussed on the forum Quora. The final extracted text corpus consisted of 166 answers to 17 questions. The analysis highlighted the 20 most prevalent critical lemmas, with deepfake foremost among them. Moreover, co-occurrence analysis identified words frequently appearing with the lemma deepfake, including video, create, and artificial intelligence. Finally, thematic analysis identified eight main themes within the deepfake corpus. Cognitive processes rely on critical thinking skills when detecting anomalies in fake videos or discerning between the negative and positive impacts of deepfakes from an ethical point of view. Moreover, people adapt their beliefs and mental schemas concerning the representation of technology. Future studies should explore the role of media literacy in helping individuals identify deepfake content, since people may not be familiar with the concept of deepfakes or may not fully understand their negative or positive implications. Increased awareness and understanding of technology can empower individuals to critically evaluate media related to Artificial Intelligence.
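The corpus analysis described above (lemma frequency plus co-occurrence with the lemma deepfake) can be sketched with a minimal, stdlib-only example. The toy corpus, tokenization, and co-occurrence window (whole answer) below are illustrative assumptions, not the study's actual data or tooling.

```python
from collections import Counter
from itertools import combinations

# Toy corpus standing in for the 166 Quora answers (illustrative only).
answers = [
    "deepfake video create artificial intelligence",
    "deepfake video fake detect",
    "artificial intelligence create deepfake",
]

# Lemma frequency across the whole corpus.
lemma_counts = Counter(word for a in answers for word in a.split())

# Co-occurrence: count unordered word pairs appearing in the same answer.
cooc = Counter()
for a in answers:
    words = sorted(set(a.split()))
    cooc.update(combinations(words, 2))

# Words most often co-occurring with the lemma "deepfake".
with_deepfake = Counter({
    (w1 if w2 == "deepfake" else w2): n
    for (w1, w2), n in cooc.items()
    if "deepfake" in (w1, w2)
})

print(lemma_counts.most_common(3))
print(with_deepfake.most_common(3))
```

On this toy corpus, deepfake is the most frequent lemma and video, create, and artificial rank among its top co-occurrents, mirroring the pattern the study reports at scale.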


Source
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0313605 (PLOS)

Publication Analysis

Top Keywords

deepfake technology (8)
individuals' cognitive (8)
cognitive processes (8)
analysis identified (8)
negative positive (8)
deepfake (7)
technology (7)
public mental (4)
mental representations (4)
representations deepfake (4)

Similar Publications


The harm caused by deepfake face images is increasing. To proactively defend against this threat, this paper proposes DADFI, a destructive active defense algorithm for deepfake face images. The algorithm adds slight perturbations to the original face images to generate adversarial samples.
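The generic idea behind such perturbation-based defenses can be sketched in a few lines: nudge each pixel by a small, bounded amount in the direction that most increases some objective a downstream forgery model is sensitive to. The scalar `score` function and the numeric gradient below are hypothetical stand-ins; DADFI's actual objective and optimization are not reproduced here.

```python
import random

def score(pixels):
    # Hypothetical scalar the defender wants to push up (e.g. a distortion
    # term meant to disrupt a face-swap model downstream). Placeholder only.
    return sum(p * p for p in pixels) / len(pixels)

def numeric_grad(f, x, h=1e-5):
    # Central-difference gradient of f at x (fine for a toy sketch).
    g = []
    for i in range(len(x)):
        xp, xm = x[:], x[:]
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def perturb(pixels, eps=0.01):
    # FGSM-style step: move each pixel by eps in the sign of the gradient,
    # keeping the overall change visually slight (bounded by eps).
    g = numeric_grad(score, pixels)
    return [p + eps * (1 if gi > 0 else -1 if gi < 0 else 0)
            for p, gi in zip(pixels, g)]

random.seed(0)
img = [random.uniform(-1, 1) for _ in range(8)]
adv = perturb(img)
print(max(abs(a - b) for a, b in zip(adv, img)))  # change stays within eps
```

The bound on the per-pixel change is what keeps the adversarial sample visually indistinguishable from the original while still raising the defender's objective.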


With the advancement of deep forgery techniques, particularly those propelled by generative adversarial networks (GANs), identifying deepfake faces has become increasingly challenging. Although existing forgery detection methods can identify tampering details within manipulated images, their effectiveness diminishes significantly in complex scenes, especially in low-quality images subjected to compression. To address this issue, we propose a novel deep face forgery video detection model named the Two-Stream Feature Domain Fusion Network (TSFF-Net).


Deepfake detection using deep feature stacking and meta-learning.

Heliyon

February 2024

Department of Computer Science and Engineering, Jadavpur University, Kolkata, 700032, India.

Deepfake is a face manipulation technique that uses deep learning to replace faces in videos in a highly realistic way. While this technology has many practical uses, if used maliciously it can seriously harm society, for example by spreading fake news or enabling cyberbullying. The ability to detect deepfakes has therefore become a pressing need.
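The stacking idea named in this paper's title can be illustrated generically: several base detectors each emit a score, and those scores become the feature vector for a meta-learner. The threshold "detectors" and fixed averaging meta-rule below are trivial stand-ins for the paper's deep networks and trained meta-learner; they only show the data flow of stacking.

```python
# Each sample: (feature_a, feature_b, is_deepfake). Toy data, illustrative only.
samples = [
    (0.9, 0.8, 1), (0.8, 0.7, 1), (0.2, 0.1, 0), (0.1, 0.3, 0),
]

def base1(a, b):
    # Hypothetical base detector keyed on feature_a (stands in for a CNN).
    return 1.0 if a > 0.5 else 0.0

def base2(a, b):
    # Hypothetical base detector keyed on feature_b.
    return 1.0 if b > 0.5 else 0.0

# Stacking: the base detectors' outputs become the meta-learner's features.
stacked = [(base1(a, b), base2(a, b), y) for a, b, y in samples]

def meta(s1, s2):
    # Meta-learner: a fixed averaging rule here; in practice it is trained
    # on the stacked features rather than hand-written.
    return 1 if (s1 + s2) / 2 >= 0.5 else 0

preds = [meta(s1, s2) for s1, s2, _ in stacked]
labels = [y for _, _, y in stacked]
accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
print(preds, accuracy)
```

The design point of stacking is that the meta-learner sees only the base models' judgments, so it can learn which detector to trust in which regime without touching the raw pixels itself.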


CSTAN: A Deepfake Detection Network with CST Attention for Superior Generalization.

Sensors (Basel)

November 2024

Guangxi Key Laboratory of Image and Graphic Intelligent Processing, Guilin University of Electronic Technology, Guilin 541004, China.

With the advancement of deepfake forgery technology, highly realistic fake faces have posed serious security risks to sensor-based facial recognition systems. Recent deepfake detection models mainly rely on deep-learning-based binary classifiers. Despite achieving high detection accuracy in intra-dataset evaluation, these models generalize poorly when applied across datasets.
