Under most circumstances, we can rely on visual information to quickly and accurately discriminate "real" objects (e.g., fresh fruit) from "fake" objects (e.g., plastic fruit). It is unclear, however, whether this distinction is made early along the ventral visual stream, when basic object features such as colour (e.g., primary visual cortex; V1) and texture (e.g., collateral sulcus; CoS) are being processed, or whether information regarding object authenticity is extracted in later visual or memory regions (e.g., perirhinal cortex, lateral occipital cortex). To examine this question, participants were placed in an fMRI scanner and presented with 300 objects photographed in colour or greyscale. Half of the objects were fake, and the other half were real. The participants' task was to categorise each image as presenting either a real or a fake object. Broadly, our analyses revealed significant activation in CoS when participants categorised real objects, particularly when they were presented in colour. We also observed activation in V1 for coloured objects, particularly real ones. These results suggest that our seemingly intuitive ability to rapidly discriminate real from fake objects occurs at the early stages of visual processing, when the brain is extracting surface-feature information such as texture (CoS) or colour (V1). Future studies could consider the time course of these neural events and probe the importance of cross-modal (e.g., auditory and haptic) information underpinning feature extraction for distinguishing real from fake objects.
DOI: http://dx.doi.org/10.1007/s00221-024-06989-3
Neural Netw
March 2025
Chongqing University of Posts and Telecommunications, Chongqing 400065, PR China.
Face forgery detection aims to distinguish AI-generated fake faces from real faces. With the rapid development of face forgery creation algorithms, a large number of generative models have been proposed, which progressively reduce the local distortions and model-specific frequency traces left in their outputs. At the same time, the compression and transmission of face data can eliminate these distortions and frequency cues, posing severe challenges to the performance and generalization ability of face forgery detection.
Exp Brain Res
March 2025
School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland.
Sci Rep
March 2025
Faculty of Computers, Misr International University, Cairo, Egypt.
Audio forensics plays a major role in the investigation and analysis of audio recordings for legal and security purposes. The advent of fake-audio attacks that combine synthetic speech with scene-manipulated audio presents a sophisticated challenge for fake audio detection. Fake audio detection, a critical technology in modern digital security, addresses the growing threat of manipulated audio content across applications including media, legal evidence, and cybersecurity.
PLoS One
March 2025
BIOGECO, INRAE, University Bordeaux, Cestas, France.
Sampling methods that are both scientifically rigorous and ethical are cornerstones of any experimental biological research. Since its introduction 30 years ago, the method of using plasticine prey to quantify predation pressure has become increasingly popular in biology. However, recent studies have questioned the accuracy of the method, suggesting that misinterpretation of predator bite marks and the artificiality of the models may bias the results.
Annu Int Conf IEEE Eng Med Biol Soc
July 2024
Deepfake technology can create highly realistic fabricated videos, presenting serious ethical concerns and threats of misinformation. Reliably distinguishing deepfakes from genuine videos is therefore critical yet challenging. This study explored electroencephalography (EEG)-based deepfake detection by analyzing EEG responses in 10 participants viewing 100 videos (50 real, 50 deepfakes).