What do adversarial images tell us about human vision?

eLife

School of Psychological Science, University of Bristol, Bristol, United Kingdom.

Published: September 2020

Deep convolutional neural networks (DCNNs) are frequently described as the best current models of human and primate vision. An obvious challenge to this claim is the existence of adversarial images that fool DCNNs but are uninterpretable to humans. However, recent research has suggested that there may be similarities in how humans and DCNNs interpret these seemingly nonsense images. We reanalysed data from a high-profile paper and conducted five experiments controlling for different ways in which these images can be generated and selected. We show that human–DCNN agreement is much weaker and more variable than previously reported, and that the weak agreement is contingent on the choice of adversarial images and the design of the experiment. Indeed, we find there are well-known methods of generating images for which humans show no agreement with DCNNs. We conclude that adversarial images still pose a challenge to theorists using DCNNs as models of human vision.
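For illustration only (not the procedure used in the paper), one widely known way to produce images that fool a DCNN is the fast gradient sign method (FGSM), which nudges each pixel in the direction that increases the classifier's loss. A minimal PyTorch sketch, assuming a pretrained classifier `model`, an input batch `x`, and true labels `y`:

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        """Return a copy of x perturbed by +/- epsilon per pixel so as to increase the loss."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Step each pixel by epsilon in the sign of the gradient, then keep values in [0, 1].
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

Such perturbation-based attacks are only one family of adversarial images; the paper also considers other well-known generation methods, such as evolved "fooling" images that are unrecognisable to humans yet classified with high confidence by DCNNs.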


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7467732
DOI: http://dx.doi.org/10.7554/eLife.55978

