Perceptual learning evidence for tuning to spectrotemporal modulation in the human auditory system.

J Neurosci

Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL 60208, USA.

Published: May 2012

Natural sounds are characterized by complex patterns of sound intensity distributed across both frequency (spectral modulation) and time (temporal modulation). Perception of these patterns has been proposed to depend on a bank of modulation filters, each tuned to a unique combination of a spectral and a temporal modulation frequency. There is considerable physiological evidence for such combined spectrotemporal tuning. However, direct behavioral evidence is lacking. Here we examined the processing of spectrotemporal modulation behaviorally using a perceptual-learning paradigm. We trained human listeners for ∼1 h/d for 7 d to discriminate the depth of spectral (0.5 cyc/oct; 0 Hz), temporal (0 cyc/oct; 32 Hz), or upward spectrotemporal (0.5 cyc/oct; 32 Hz) modulation. Each trained group learned more on their respective trained condition than did controls who received no training. Critically, this depth-discrimination learning did not generalize to the trained stimuli of the other groups or to downward spectrotemporal (0.5 cyc/oct; -32 Hz) modulation. Learning on discrimination also led to worsening on modulation detection, but only when the same spectrotemporal modulation was used for both tasks. Thus, these influences of training were specific to the trained combination of spectral and temporal modulation frequencies, even when the trained and untrained stimuli had one modulation frequency in common. This specificity indicates that training modified circuitry that had combined spectrotemporal tuning, and therefore that circuits with such tuning can influence perception. These results are consistent with the possibility that the auditory system analyzes sounds through filters tuned to combined spectrotemporal modulation.
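The modulated stimuli described above are conventionally defined as "ripples": sinusoidal envelopes over log-frequency (in octaves) and time. The sketch below is illustrative only (function and parameter names are assumptions, not the authors' code); it generates envelopes of the form S(x, t) = 1 + m·sin(2π(w·t + Ω·x)) for the three trained conditions, where Ω is the spectral density in cyc/oct, w the temporal rate in Hz (sign sets ripple direction), and m the modulation depth that listeners discriminated.

```python
import numpy as np

def ripple_envelope(omega_cyc_per_oct, w_hz, depth, n_oct=5, dur_s=1.0,
                    n_freq=64, fs_env=1000):
    """Spectrotemporal ripple envelope S(x, t) = 1 + m*sin(2*pi*(w*t + Omega*x)).

    Illustrative sketch of the standard ripple-stimulus family; all
    parameter defaults here are assumptions, not values from the paper.
    Returns an array of shape (n_freq, n_time).
    """
    x = np.linspace(0.0, n_oct, n_freq)       # carrier position in octaves
    t = np.arange(0.0, dur_s, 1.0 / fs_env)   # time in seconds
    X, T = np.meshgrid(x, t, indexing="ij")
    return 1.0 + depth * np.sin(2 * np.pi * (w_hz * T + omega_cyc_per_oct * X))

# The three trained conditions from the abstract (depth chosen arbitrarily):
spectral_only   = ripple_envelope(0.5,  0.0, depth=0.5)  # 0.5 cyc/oct, static
temporal_only   = ripple_envelope(0.0, 32.0, depth=0.5)  # 32 Hz, spectrally flat
spectrotemporal = ripple_envelope(0.5, 32.0, depth=0.5)  # moving ripple
```

A downward ripple, as in the untrained (0.5 cyc/oct; -32 Hz) condition, follows by flipping the sign of the temporal rate.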


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3519395
DOI: http://dx.doi.org/10.1523/JNEUROSCI.5732-11.2012

Publication Analysis

Top Keywords

spectrotemporal modulation (16)
modulation (13)
temporal modulation (12)
combined spectrotemporal (12)
spectrotemporal (8)
auditory system (8)
filters tuned (8)
combination spectral (8)
spectral temporal (8)
modulation frequency (8)

Similar Publications

Investigating the Spatio-Temporal Signatures of Language Control-Related Brain Synchronization Processes.

Hum Brain Mapp

February 2025

Université libre de Bruxelles (ULB), UNI - ULB Neuroscience Institute, Laboratoire de Neuroanatomie et Neuroimagerie translationnelles (LN2T), Brussels, Belgium.

Language control processes allow for the flexible manipulation and access to context-appropriate verbal representations. Functional magnetic resonance imaging (fMRI) studies have localized the brain regions involved in language control processes usually by comparing high vs. low lexical-semantic control conditions during verbal tasks.


Sparse high-dimensional decomposition of non-primary auditory cortical receptive fields.

PLoS Comput Biol

January 2025

Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland, United States of America.

Characterizing neuronal responses to natural stimuli remains a central goal in sensory neuroscience. In auditory cortical neurons, the stimulus selectivity of elicited spiking activity is summarized by a spectrotemporal receptive field (STRF) that relates neuronal responses to the stimulus spectrogram. Though effective in characterizing primary auditory cortical responses, STRFs of non-primary auditory neurons can be quite intricate, reflecting their mixed selectivity.
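The linear STRF model summarized here predicts a neuron's firing rate by filtering the stimulus spectrogram with the receptive field, r(t) = Σ_f Σ_τ STRF(f, τ)·S(f, t − τ). A minimal sketch of that prediction step (array shapes and names are assumptions, not from the paper):

```python
import numpy as np

def strf_predict(strf, spectrogram):
    """Predict a firing-rate trace from a spectrogram via a linear STRF.

    strf:        (n_freq, n_lag) weights over frequency and time lag.
    spectrogram: (n_freq, n_time) stimulus power over frequency and time.
    Returns r of shape (n_time,), where
    r[t] = sum over f, tau of strf[f, tau] * spectrogram[f, t - tau].
    """
    n_freq, n_lag = strf.shape
    nf2, n_time = spectrogram.shape
    assert n_freq == nf2, "frequency axes must match"
    r = np.zeros(n_time)
    for tau in range(n_lag):
        # Delay the spectrogram by tau bins, then weight by that lag's column.
        shifted = np.zeros((n_freq, n_time))
        shifted[:, tau:] = spectrogram[:, :n_time - tau]
        r += strf[:, tau] @ shifted
    return r
```

Non-primary neurons with "mixed selectivity", as the snippet notes, are exactly those whose responses this single linear kernel fails to capture, motivating the decomposition methods the article proposes.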


Two competing accounts propose that the disruption of short-term memory by irrelevant speech arises either from interference-by-process (e.g., the changing-state effect) or from attentional capture, but it is unclear how whispering affects the irrelevant speech effect.

Article Synopsis
  • The auditory cortex processes complex sound features using convolutional neural networks (CNNs), which offer improved prediction of neural activity from natural sounds compared to traditional models.
  • A novel method visualizes the tuning subspace of CNNs, allowing researchers to analyze how different filters predict neuronal responses to various stimuli, achieving similar accuracy to complete models.
  • The findings revealed diverse nonlinear neural responses and how local neuron populations organize themselves within the tuning subspace, highlighting specific patterns related to neuron types and their roles in the auditory processing circuit.

Current tests of hearing fail to diagnose pathologies in ~10% of patients seeking help for hearing difficulties. Neural ensemble responses to perceptually relevant cues in the amplitude envelope, termed envelope following responses (EFR), hold promise as an objective diagnostic tool to probe these 'hidden' hearing difficulties. But clinical translation is impeded by current measurement approaches involving static amplitude modulated (AM) tones, which are time-consuming and lack optimal spectrotemporal resolution.

