Hyper-realistic face masks have been used as disguises in at least one border crossing and in numerous criminal cases. Experimental tests using these masks have shown that viewers accept them as real faces under a range of conditions. Here, we tested mask detection in a live identity verification task. Fifty-four visitors at the London Science Museum viewed a mask wearer at close range (2 m) as part of a mock passport check. They then answered a series of questions designed to assess mask detection, while the masked traveller was still in view. In the identity matching task, 8% of viewers accepted the mask as matching a real photo of someone else, and 82% accepted the match between masked person and masked photo. When asked if there was any reason to detain the traveller, only 13% of viewers mentioned a mask. A further 11% picked disguise from a list of suggested reasons. Even after reading about mask-related fraud, 10% of viewers judged that the traveller was not wearing a mask. Overall, mask detection was poor and was not predicted by unfamiliar face matching performance. We conclude that hyper-realistic face masks could go undetected during live identity checks.
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7583446 | PMC |
| http://dx.doi.org/10.1177/0301006620904614 | DOI Listing |
Comput Intell Neurosci
October 2022
Department of Computer Science and Engineering, Universitat de Lleida, Lleida, Spain.
Recent technological advancements in Artificial Intelligence make it easy to create deepfakes: hyper-realistic videos in which images and video clips are processed to produce fake footage that appears authentic. Many of these are based on swapping faces without the consent of the person whose appearance and voice are used. Because emotions are inherent in human communication, studying how deepfakes transfer emotional expressions from the original to the fake video is relevant.
Front Psychol
September 2022
Booth School of Business, The University of Chicago, Chicago, IL, United States.
Research in person and face perception has broadly focused on group-level consensus that individuals hold when making judgments of others (e.g., "X type of face looks trustworthy").
Sensors (Basel)
June 2022
Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38541, Korea.
Deep learning is used to address a wide range of challenging issues, including large-scale data analysis, image processing, object detection, and autonomous control. At the same time, deep learning techniques are also used to develop software that poses a danger to privacy, democracy, and national security. Fake content in the form of images and videos created through digital manipulation with artificial intelligence (AI) approaches has become widespread during the past few years.
PeerJ Comput Sci
September 2021
Mathematics Department, Al-Azhar University, Cairo, Nasr City, Egypt.
Recently, deepfake techniques for swapping faces have been spreading, allowing easy creation of hyper-realistic fake videos. Detecting the authenticity of a video has become increasingly critical because of its potential negative impact on the world. Here, a new approach, You Only Look Once Convolution Recurrent Neural Networks (YOLO-CRNNs), is introduced to detect deepfake videos.
Perception
March 2020
Department of Psychology, University of York, UK.