The human brain's role in face processing (FP) and in decision making during social interactions depends on recognizing faces accurately. However, the prevalence of deepfakes (AI-generated synthetic images) poses challenges in discerning real from synthetic identities. This study investigated healthy individuals' cognitive and emotional engagement in a visual discrimination task involving real and deepfake human faces expressing positive, negative, or neutral emotions. Electroencephalographic (EEG) data were collected from 23 healthy participants using a 21-channel dry-EEG headset, and power spectrum and event-related potential (ERP) analyses were performed. Results revealed statistically significant activations in specific brain areas depending on the authenticity and emotional content of the stimuli. Power spectrum analysis highlighted a right-hemisphere predominance in the theta, alpha, high-beta, and gamma bands for real faces, whereas deepfakes mainly affected the frontal and occipital areas in the delta band. ERP analysis hinted at the possibility of discriminating between real and synthetic faces, as the N250 (200-300 ms after stimulus onset) peak latency decreased when observing real faces in the right frontal (RF) and left temporo-occipital (LTO) areas; differences also emerged between emotions, as the P100 (90-140 ms) peak amplitude was higher in the right temporo-occipital (RTO) area for happy faces than for neutral and sad ones.
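The abstract does not include the authors' analysis code, but the two analyses it names are standard: band-limited power from a Welch power spectral density, and peak latency/amplitude of ERP components within fixed post-stimulus windows (P100 at 90-140 ms, N250 at 200-300 ms). The following is a minimal sketch of both steps for a single channel; the sampling rate, band edges, and gamma upper bound are assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.signal import welch

FS = 512  # assumed sampling rate in Hz (not stated in the abstract)

# Frequency bands used in the power-spectrum analysis (Hz);
# the exact edges, and the 45 Hz gamma cutoff, are illustrative choices.
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "high_beta": (20, 30), "gamma": (30, 45)}

def band_powers(eeg, fs=FS):
    """Mean Welch-PSD power per frequency band for one EEG channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), fs * 2))
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

def erp_peak(erp, fs=FS, window=(0.20, 0.30), polarity=-1):
    """Peak latency (s) and amplitude of an ERP component in a
    post-stimulus window. polarity=-1 finds a negative deflection
    (e.g., N250); polarity=+1 a positive one (e.g., P100 with
    window=(0.09, 0.14)). `erp` starts at stimulus onset."""
    t = np.arange(len(erp)) / fs
    mask = (t >= window[0]) & (t <= window[1])
    idx = np.argmax(polarity * erp[mask])
    return t[mask][idx], erp[mask][idx]
```

In practice these functions would be applied per channel group (e.g., RF, LTO, RTO clusters) and per condition (real vs. deepfake, emotion category) before statistical testing.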


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10526392
DOI: http://dx.doi.org/10.3390/brainsci13091233


Similar Publications

Deepfake detection using deep feature stacking and meta-learning.

Heliyon

February 2024

Department of Computer Science and Engineering, Jadavpur University, Kolkata, 700032, India.

Deepfake is a face-manipulation technique that uses deep learning to replace faces in videos in a highly realistic way. While this technology has many practical uses, when used maliciously it can cause significant harm to society, such as spreading fake news or enabling cyberbullying. Therefore, the ability to detect deepfakes has become a pressing need.


CSTAN: A Deepfake Detection Network with CST Attention for Superior Generalization.

Sensors (Basel)

November 2024

Guangxi Key Laboratory of Image and Graphic Intelligent Processing, Guilin University of Electronic Technology, Guilin 541004, China.

With the advancement of deepfake forgery technology, highly realistic fake faces have posed serious security risks to sensor-based facial recognition systems. Recent deepfake detection models mainly use deep-learning-based binary classifiers. Despite achieving high detection accuracy in intra-dataset evaluation, these models lack generalization ability when applied across datasets.


The proliferation of multimedia-based deepfake content in recent years has posed significant challenges to information security and authenticity, necessitating the use of methods beyond dependable dynamic detection. In this paper, we utilize the powerful combination of Deep Generative Adversarial Networks (GANs) and Transfer Learning (TL) to introduce a new technique for identifying deepfakes in multimedia systems. Each of the GAN architectures may be customized to detect subtle changes in different multimedia formats by combining their advantages.


The continuous advancement of face forgery techniques has caused a series of trust crises, posing a significant menace to information security and personal privacy. In response, deep learning is being employed to develop effective detection methods to identify deepfake images and videos. Currently, most detection methods generally achieve satisfactory performance in intra-domain detection.

Article Synopsis
  • Synthetic images called deepfakes are made using computer graphics and AI, but they can spread fake information and break social media rules.
  • A new and better type of computer model called GAN helps tell the difference between real and fake images by using advanced techniques to improve accuracy.
  • The study tested this model on specific datasets and got amazing results, showing it can successfully detect fake images while also creating them safely.
