The mammalian retina engages a broad array of linear and nonlinear circuit mechanisms to convert natural scenes into retinal ganglion cell (RGC) spike outputs. Although many individual integration mechanisms are well understood, we know less about how multiple mechanisms interact to encode the complex spatial features present in natural inputs. Here, we identified key spatial features in natural scenes that shape encoding by primate parasol RGCs. Our approach identified simplifications in the spatial structure of natural scenes that minimally altered RGC spike responses. We observed that reducing natural movies into 16 linearly integrated regions described ∼80% of the structure of parasol RGC spike responses; this performance depended on the number of regions but not their precise spatial locations. We used simplified stimuli to design high-dimensional metamers that recapitulated responses to naturalistic movies. Finally, we modeled the retinal computations that convert flashed natural images into one-dimensional spike counts.
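The core simplification described in this abstract, collapsing a movie into a small number of linearly integrated regions whose weighted sum approximates the cell's drive, can be sketched as follows. This is an illustrative toy model, not the authors' fitted pipeline: the 4×4 grid, the random weights, and the function names are assumptions for demonstration, and the real analysis includes retinal nonlinearities this linear sketch omits.

```python
import numpy as np

def pool_regions(frame, n=4):
    """Average pixel intensities within each cell of an n x n grid,
    reducing the frame to n*n linearly integrated region values."""
    h, w = frame.shape
    return frame.reshape(n, h // n, n, w // n).mean(axis=(1, 3)).ravel()

def predict_response(movie, weights, n=4):
    """Weighted sum of the per-region averages for each frame: a purely
    linear approximation of spatial integration over 16 regions."""
    return np.array([pool_regions(f, n) @ weights for f in movie])

rng = np.random.default_rng(0)
movie = rng.random((10, 32, 32))    # 10 frames of a 32x32-pixel movie
weights = rng.standard_normal(16)   # one hypothetical weight per region
responses = predict_response(movie, weights)
print(responses.shape)  # (10,) -- one scalar drive per frame
```

Under this kind of reduction, the abstract reports that region count, not the regions' precise spatial locations, is what limits how well the simplified stimulus reproduces the spike response.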
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9674269
DOI: http://dx.doi.org/10.1073/pnas.2121744119
Curr Res Neurobiol
June 2025
Centre for Cognitive Neuroimaging, School of Psychology and Neuroscience, College of Medical, Veterinary and Life Sciences, University of Glasgow, 62 Hillhead Street, Glasgow, G12 8QB, United Kingdom.
Identifying the objects embedded in natural scenes relies on recurrent processing between lower and higher visual areas. How is cortical feedback information related to objects and scenes organised in lower visual areas? The spatial organisation of cortical feedback converging in early visual cortex during object and scene processing could be retinotopically specific, as it is coded in V1, or object-centred, as coded in higher areas, or both. Here, we characterise object- and scene-related feedback information to V1.
Behav Res Methods
January 2025
Department of Psychology, Columbia University, New York, NY, USA.
While viewing a visual stimulus, we often cannot tell whether it is inherently memorable or forgettable. However, the memorability of a stimulus can be quantified and partially predicted by a collection of conceptual and perceptual factors. Higher-level properties that represent the "meaningfulness" of a visual stimulus to viewers best predict whether it will be remembered or forgotten across a population.
Psychon Bull Rev
January 2025
NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, China.
We examined the intricate mechanisms underlying visual processing of complex motion stimuli by measuring the detection sensitivity to contraction and expansion patterns and the discrimination sensitivity to the location of the center of motion (CoM) in various real and unreal optic flow stimuli. We conducted two experiments (N = 20 each) and compared responses to both "real" optic flow stimuli containing information about self-movement in a three-dimensional scene and "unreal" optic flow stimuli lacking such information. We found that detection sensitivity to contraction surpassed that to expansion patterns for unreal optic flow stimuli, whereas this trend was reversed for real optic flow stimuli.
Proc Biol Sci
January 2025
Department of Zoology, Faculty of Science, Charles University, Prague 128 43, Czech Republic.
African mole-rats (Bathyergidae, Rodentia) are subterranean rodents that live in extensive dark underground tunnel systems and rarely emerge aboveground. They can discriminate between light and dark but show no overt visually driven behaviours except for light-avoidance responses. Their eyes and central visual system are strongly reduced but not degenerated.
Sensors (Basel)
January 2025
College of Electronics and Information Engineering, South-Central Minzu University, Wuhan 430074, China.
Drones are extensively utilized in both military and civilian development. Eliminating the reliance of drone positioning systems on GNSS while enhancing positioning accuracy is of significant research value. This paper presents a novel approach that employs a real-scene 3D model and image point cloud reconstruction technology for autonomous drone positioning, achieving high positioning accuracy.