A fundamental difficulty for image- or appearance-based models of face recognition is to distinguish variations in image structure between two different individuals from those that can occur for a given individual due to changes in lighting, facial expression, or pose. The research described in the present article was designed to examine how human observers are able to cope with this problem. In two experiments, observers performed either a match-to-sample task (Experiment 1) or same-different identity judgments (Experiment 2) for photographs of unfamiliar individuals. A key aspect of these studies is that the matching or same stimulus pairs were never identical; that is to say, they always differed in terms of facial expression or the pattern of illumination. In order to provide a quantitative assessment of appearance-based models, we also measured the optical differences for each pair of same or different images using a variety of possible distance metrics based on the pattern of pixel intensities or wavelet decompositions. These difference measures were then correlated with the accuracy of observers' judgments for each individual stimulus pair. The results clearly show that human observers can readily distinguish relevant from irrelevant image changes in comparisons of facial identity, but that this performance cannot be explained by any of the appearance-based models we tested.
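To make the analysis described in the abstract concrete, the following is a minimal Python sketch, not the authors' code, of the general approach: compute an appearance-based distance (here, a pixel-intensity metric and a wavelet-coefficient metric) for each stimulus pair and correlate those distances with per-pair judgment accuracy. The wavelet choice ('db4'), the image size, and the synthetic images and accuracy values are illustrative assumptions only.

```python
# Sketch of appearance-based image-difference metrics correlated with
# per-pair human accuracy. All data below are synthetic stand-ins.

import numpy as np
import pywt                      # PyWavelets, for wavelet decompositions
from scipy.stats import pearsonr


def pixel_distance(img_a, img_b):
    """Euclidean distance between pixel-intensity vectors."""
    return np.linalg.norm(img_a.astype(float).ravel()
                          - img_b.astype(float).ravel())


def wavelet_distance(img_a, img_b, wavelet="db4", level=3):
    """Euclidean distance between concatenated 2-D wavelet coefficients."""
    def coeffs(img):
        decomp = pywt.wavedec2(img.astype(float), wavelet=wavelet, level=level)
        arr, _ = pywt.coeffs_to_array(decomp)
        return arr.ravel()
    return np.linalg.norm(coeffs(img_a) - coeffs(img_b))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in data: 20 stimulus pairs of 64x64 grayscale images and a
    # per-pair proportion-correct score (synthetic, for illustration only).
    pairs = [(rng.random((64, 64)), rng.random((64, 64))) for _ in range(20)]
    accuracy = rng.random(20)

    pix = np.array([pixel_distance(a, b) for a, b in pairs])
    wav = np.array([wavelet_distance(a, b) for a, b in pairs])

    # Correlate each image-difference measure with per-pair accuracy,
    # as in the analysis the abstract describes.
    for name, dist in (("pixel", pix), ("wavelet", wav)):
        r, p = pearsonr(dist, accuracy)
        print(f"{name:7s} distance vs. accuracy: r = {r:+.3f}, p = {p:.3f}")
```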

Source: http://dx.doi.org/10.1167/8.15.5
