Trichromatic color vision is a fundamental aspect of the visual system shared by humans and non-human primates. In human observers, color has been shown to facilitate object identification. However, little is known about the role color plays in the higher-level vision of non-human primates. Here, we addressed this question by studying the interaction between luminance- and color-based structural information in the recognition of natural scenes. We present psychophysical data showing that monkey and human observers profited equally from color when recognizing natural scenes, and were equally impaired when scenes were manipulated with colored noise. This effect was most prominent for degraded image conditions. Using a specific procedure for stimulus degradation, we found that both the improvement and the impairment in visual memory performance are due to the contribution of image color, independent of luminance-based object information. Our results demonstrate that humans as well as non-human primates exploit their sensory ability of color vision to achieve higher performance in visual recognition tasks, especially when shape features are degraded.
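The separation of color from luminance information described above can be illustrated with a minimal sketch. This is not the authors' actual stimulus pipeline (which the abstract does not specify); it assumes a simple Rec. 601 luma conversion to strip chromatic content while preserving luminance structure, and Gaussian noise as a stand-in degradation, applied either per channel (chromatic noise) or identically across channels (achromatic noise):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a natural scene: an 8x8 RGB image with values in [0, 1].
scene = rng.random((8, 8, 3))

def luminance(img):
    """Rec. 601 luma: removes chromatic information, keeps luminance structure."""
    return img @ np.array([0.299, 0.587, 0.114])

def degrade(img, noise_level=0.3, chromatic=True, rng=rng):
    """Add Gaussian noise. Chromatic noise perturbs each channel independently
    (distorting color); achromatic noise adds the same perturbation to all
    channels (distorting luminance while leaving channel differences intact)."""
    if chromatic:
        noise = rng.normal(0.0, noise_level, img.shape)
    else:
        noise = rng.normal(0.0, noise_level, img.shape[:2])[..., None]
    return np.clip(img + noise, 0.0, 1.0)

gray = luminance(scene)    # color removed, luminance-based shape preserved
noisy = degrade(scene)     # colored-noise degradation of the original scene
```

Comparing recognition of `gray` versus `scene`, and of `noisy` versus `scene`, is the kind of contrast the experiments above rely on: if color contributes independently of luminance-based shape, performance should differ between these conditions.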


Source
http://dx.doi.org/10.1167/9.5.14

Publication Analysis

Top Keywords

natural scenes (12), non-human primates (12), recognition natural (8), color vision (8), human observers (8), color (7), color shape (4), shape interactions (4), interactions recognition (4), scenes (4)

Similar Publications

Retinotopic biases in contextual feedback signals to V1 for object and scene processing.

Curr Res Neurobiol

June 2025

Centre for Cognitive Neuroimaging, School of Psychology and Neuroscience, College of Medical, Veterinary and Life Sciences, University of Glasgow, 62 Hillhead Street, Glasgow, G12 8QB, United Kingdom.

Identifying the objects embedded in natural scenes relies on recurrent processing between lower and higher visual areas. How is cortical feedback information related to objects and scenes organised in lower visual areas? The spatial organisation of cortical feedback converging in early visual cortex during object and scene processing could be retinotopically specific as it is coded in V1, or object centred as coded in higher areas, or both. Here, we characterise object and scene-related feedback information to V1.


Predicting image memorability from evoked feelings.

Behav Res Methods

January 2025

Department of Psychology, Columbia University, New York, NY, USA.

While viewing a visual stimulus, we often cannot tell whether it is inherently memorable or forgettable. However, the memorability of a stimulus can be quantified and partially predicted by a collection of conceptual and perceptual factors. Higher-level properties that represent the "meaningfulness" of a visual stimulus to viewers best predict whether it will be remembered or forgotten across a population.


We examined the intricate mechanisms underlying visual processing of complex motion stimuli by measuring the detection sensitivity to contraction and expansion patterns and the discrimination sensitivity to the location of the center of motion (CoM) in various real and unreal optic flow stimuli. We conducted two experiments (N = 20 each) and compared responses to both "real" optic flow stimuli containing information about self-movement in a three-dimensional scene and "unreal" optic flow stimuli lacking such information. We found that detection sensitivity to contraction surpassed that to expansion patterns for unreal optic flow stimuli, whereas this trend was reversed for real optic flow stimuli.


African mole-rats (Bathyergidae, Rodentia) are subterranean rodents that live in extensive dark underground tunnel systems and rarely emerge aboveground. They can discriminate between light and dark but show no overt visually driven behaviours except for light-avoidance responses. Their eyes and central visual system are strongly reduced but not degenerated.


Drones are widely used in both military and civilian applications. Reducing the reliance of drone positioning systems on GNSS while improving their accuracy is of significant research value. This paper presents an approach that uses a real-scene 3D model and image point-cloud reconstruction for autonomous drone positioning, achieving high positioning accuracy.

