Even with great advances in machine vision, animals remain unmatched in their ability to visually search complex scenes. Animals from bees [1, 2] to birds [3] to humans [4-12] learn the statistical relations in visual environments to guide their search for targets. Here, we investigate a novel way in which humans use rapidly acquired information about scenes: guiding search toward likely target sizes.
J Exp Psychol Hum Percept Perform, June 2017
Although the facilitation of visual search by contextual information is well established, there is little understanding of the independent contributions of different types of contextual cues in scenes. Here we manipulated three types of contextual information: object co-occurrence, multiple object configurations, and background category. We isolated each cue's benefit to target detectability and its impact on decision bias, confidence, and the guidance of eye movements.
Scene context is known to facilitate object recognition and guide visual search, but little work has focused on isolating image-based cues and evaluating their contributions to eye movement guidance and search performance. Here, we explore three types of contextual cues (a co-occurring object, the configuration of other objects, and the superordinate category of background elements) and assess their joint contributions to search performance within a cue-combination framework, along with the temporal unfolding of their extraction. We also assess whether observers' ability to extract each contextual cue in the visual periphery is a bottleneck that determines the utilization and contribution of each cue to search guidance and decision accuracy.
Saliency models have frequently been used to predict eye movements made during image viewing without a specified task (free viewing). However, no study has used a single image set to systematically compare free viewing with other tasks. We investigated the effect of task differences on the ability of three saliency models to predict the performance of humans viewing a novel database of 800 natural images.
How faces change across lengthy time periods, and whether the changing appearance of a face functions as an identity category, was investigated in two experiments. In Experiment 1, the faces of 15 individuals were multidimensionally scaled at each of seven age epochs (roughly <6 months to 75 years of age) and correlated with the same persons at different ages. In Experiment 2, three individuals at each of seven age epochs were multidimensionally scaled, and analyses explored the conceptual structure and transformational path of each person within the space.