Occipital cortices of different sighted people contain analogous maps of visual information (e.g., foveal vs. peripheral). In congenital blindness, "visual" cortices respond to nonvisual stimuli. Do the visual cortices of different blind people represent common informational maps? We leverage naturalistic stimuli and inter-subject pattern similarity analysis to address this question. Blindfolded sighted (n = 22) and congenitally blind (n = 22) participants listened to 6 sound clips (5-7 min each) during functional magnetic resonance imaging (fMRI): 3 auditory excerpts from movies; a naturalistic spoken narrative; and matched degraded auditory stimuli (Backwards Speech, scrambled sentences). We compared the spatial activity patterns evoked by each unique 10-s segment of the different auditory excerpts across blind and across sighted people. Segments of meaningful naturalistic stimuli produced distinctive activity patterns in frontotemporal networks that were shared across blind and across sighted individuals. Segment-specific, cross-subject patterns emerged in visual cortex only in the blind group, and only for meaningful naturalistic stimuli, not Backwards Speech. Spatial patterns of activity within visual cortices are thus sensitive to time-varying information in meaningful naturalistic auditory stimuli in a broadly similar manner across blind individuals.
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9758574
DOI: http://dx.doi.org/10.1093/cercor/bhac048
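The inter-subject pattern similarity analysis described above can be illustrated with a short sketch: each subject's voxel pattern for a segment is correlated with the leave-one-out average pattern of the other subjects, and shared information shows up as higher correlations for matching than for non-matching segments. This is a minimal sketch in Python, assuming preprocessed data shaped (subjects × segments × voxels); the array names, the leave-one-out scheme, and the random placeholder data are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_segments, n_voxels = 22, 30, 500
# Placeholder data: one spatial activity pattern per subject per 10-s segment.
data = rng.standard_normal((n_subjects, n_segments, n_voxels))

def intersubject_pattern_similarity(data):
    """Correlate each subject's segment patterns with the leave-one-out
    average of the other subjects, for matching and non-matching segments."""
    n_sub, n_seg, _ = data.shape
    match, mismatch = [], []
    for s in range(n_sub):
        others = data[np.arange(n_sub) != s].mean(axis=0)  # (n_seg, n_voxels)
        for i in range(n_seg):
            for j in range(n_seg):
                r = np.corrcoef(data[s, i], others[j])[0, 1]
                (match if i == j else mismatch).append(r)
    return np.mean(match), np.mean(mismatch)

r_match, r_mismatch = intersubject_pattern_similarity(data)
print(f"matching segments: r = {r_match:.3f}; non-matching: r = {r_mismatch:.3f}")
```

Shared segment-specific maps would appear as the matching-segment correlation reliably exceeding the non-matching one, computed separately within each group and region of interest.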
J Neurosci
December 2024
Department of Psychology, University of Virginia, Charlottesville, VA 22904, USA.
Sensory experience during development has lasting effects on perception and neural processing. Exposing juvenile animals to artificial stimuli influences the tuning and functional organization of the auditory cortex, but less is known about how the rich acoustical environments experienced by vocal communicators affect the processing of complex vocalizations. Here, we show that in zebra finches (Taeniopygia guttata), a colonial-breeding songbird species, exposure to a naturalistic social-acoustical environment during development has a profound impact on auditory perceptual behavior and on cortical-level auditory responses to conspecific song.
PLoS One
December 2024
School of Biomedical Sciences, Monash University, Melbourne, Victoria, Australia.
A central topic in neuroscience is the neural coding problem, which aims to decipher how the brain signals sensory information through neural activity. Despite significant advances in this area, the characterisation of information encoding through the precise timing of spikes in the somatosensory cortex remains limited. Here, we utilised a comprehensive dataset from previous studies to identify and characterise the temporal response patterns of Layer 4 neurons in the rat barrel cortex to five distinct stimuli of varying complexity: Basic, Contact, Whisking, Rough, and Smooth.
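One common way to characterise such temporal response patterns is to bin spike times into per-stimulus peristimulus time histograms (PSTHs) and compare the resulting time courses across stimuli. The following is a minimal sketch of that generic approach, not the study's exact method; the toy spike trains, latencies, and 5-ms bin width are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
stimuli = ["Basic", "Contact", "Whisking", "Rough", "Smooth"]
n_trials, duration_ms, bin_ms = 50, 200, 5
bins = np.arange(0, duration_ms + bin_ms, bin_ms)

def psth(spike_trains, bins):
    """Trial-averaged spike count per time bin: a temporal response pattern."""
    counts = np.stack([np.histogram(train, bins=bins)[0] for train in spike_trains])
    return counts.mean(axis=0)

# Toy spike trains: each stimulus evokes spikes at a different latency,
# so the PSTHs differ in timing rather than in overall rate.
psths = {}
for k, stim in enumerate(stimuli):
    trains = [rng.normal(40 + 20 * k, 8, size=10) for _ in range(n_trials)]
    psths[stim] = psth(trains, bins)

# Low pairwise PSTH correlations indicate that spike timing alone
# distinguishes the stimuli.
for i, a in enumerate(stimuli):
    for b in stimuli[i + 1:]:
        r = np.corrcoef(psths[a], psths[b])[0, 1]
        print(f"{a} vs {b}: r = {r:.2f}")
```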
bioRxiv
December 2024
Kresge Hearing Research Institute, Department of Otolaryngology - Head and Neck Surgery, University of Michigan, Ann Arbor, MI.
Auditory masking, the interference with the encoding and processing of an acoustic stimulus imposed by one or more competing stimuli, is nearly omnipresent in daily life and presents a critical barrier to many listeners, including people with hearing loss, users of hearing aids and cochlear implants, and people with auditory processing disorders. The perceptual aspects of masking have been actively studied for several decades, with particular emphasis on the masking of speech by other speech sounds. The neural effects of such masking, especially at the subcortical level, have been studied far less, in large part because of the technical limitations of making such measurements.
Behav Res Methods
December 2024
CAP Team, Centre de Recherche en Neurosciences de Lyon - INSERM U1028 - CNRS UMR 5292 - UCBL - UJM, 95 Boulevard Pinel, 69675, Bron, France.
Artificial intelligence techniques offer promising avenues for extracting human body features from videos, yet to date no freely accessible tool has reliably provided holistic, fine-grained behavioral analyses. To address this, we developed a machine learning tool based on a two-level approach: a lower level uses computer vision to extract fine-grained, comprehensive behavioral features such as skeleton and facial points, gaze, and action units; a second level couples machine learning classification with explainability, providing the modularity needed to determine which behavioral features are triggered by specific environments. To validate our tool, we filmed 16 participants across six conditions that varied in the presence of a person ("Pers"), a sound ("Snd"), or silence ("Rest"), and in emotional level, using self-referential ("Self") and control ("Ctrl") stimuli.
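The second level described here, classification coupled with explainability, can be sketched with standard tools: fit a classifier on per-frame behavioral features and rank features by permutation importance. In the sketch below, the feature names and random data are placeholders, and scikit-learn's RandomForestClassifier and permutation_importance stand in for whatever the tool actually uses; the first-level computer-vision extraction is not shown.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
# Hypothetical per-frame features from the first (computer-vision) level.
features = ["head_pitch", "gaze_x", "gaze_y", "au12_smile", "shoulder_sway"]
conditions = ["Pers", "Snd", "Rest", "Self", "Ctrl"]

X = rng.standard_normal((600, len(features)))   # placeholder feature matrix
y = rng.integers(0, len(conditions), size=600)  # placeholder condition labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Explainability step: which behavioral features drive the classification?
imp = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
for name, score in sorted(zip(features, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

Permutation importance is one reasonable choice for the explainability step because it is model-agnostic: any classifier could be swapped in at the second level without changing the feature-ranking logic.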
Proc Natl Acad Sci U S A
December 2024
Committee on Computational Neuroscience, Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL 60637.
Everything that the brain sees must first be encoded by the retina, which maintains a reliable representation of the visual world across many different, complex natural scenes while also adapting to stimulus changes. This study quantifies whether and how the brain selectively encodes stimulus features relevant to scene identity in complex naturalistic environments. While a wealth of previous work has examined the static and dynamic features of the population code in retinal ganglion cells (RGCs), less is known about how populations achieve encoding that is both flexible and reliable in natural moving scenes.
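One standard way to quantify how much scene-identity information a population code carries is to decode scene identity from population response vectors and compare accuracy to chance. This is a minimal sketch of that generic decoding approach, not the study's method; the simulated RGC responses, cell counts, noise level, and logistic-regression decoder are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_cells, n_scenes, n_trials = 80, 5, 40

# Simulated RGC population: each scene drives a distinct mean firing-rate
# pattern, with independent trial-to-trial noise.
means = rng.standard_normal((n_scenes, n_cells))
X = np.vstack([means[s] + 0.5 * rng.standard_normal((n_trials, n_cells))
               for s in range(n_scenes)])
y = np.repeat(np.arange(n_scenes), n_trials)

# Cross-validated decoding accuracy measures how reliably the population
# code separates scene identities.
decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, X, y, cv=5).mean()
print(f"scene-identity decoding accuracy: {accuracy:.2f} (chance = {1/n_scenes:.2f})")
```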