The rapid advancement of "deepfake" video technology, which uses deep-learning artificial intelligence algorithms to create fake videos that look real, has given urgency to the question of how policymakers and technology companies should moderate inauthentic content. We conduct an experiment to measure people's alertness to, and ability to detect, a high-quality deepfake among a set of videos. First, we find that in a natural setting with no content warnings, individuals who are exposed to a deepfake video of neutral content are no more likely to detect anything out of the ordinary (32.