Natural sounds are characterized by complex patterns of sound intensity distributed across both frequency (spectral modulation) and time (temporal modulation). Perception of these patterns has been proposed to depend on a bank of modulation filters, each tuned to a unique combination of a spectral and a temporal modulation frequency. There is considerable physiological evidence for such combined spectrotemporal tuning. However, direct behavioral evidence is lacking. Here we examined the processing of spectrotemporal modulation behaviorally using a perceptual-learning paradigm. We trained human listeners for ∼1 h/d for 7 d to discriminate the depth of spectral (0.5 cyc/oct; 0 Hz), temporal (0 cyc/oct; 32 Hz), or upward spectrotemporal (0.5 cyc/oct; 32 Hz) modulation. Each trained group learned more on their respective trained condition than did controls who received no training. Critically, this depth-discrimination learning did not generalize to the trained stimuli of the other groups or to downward spectrotemporal (0.5 cyc/oct; -32 Hz) modulation. Learning on discrimination also led to worsening on modulation detection, but only when the same spectrotemporal modulation was used for both tasks. Thus, these influences of training were specific to the trained combination of spectral and temporal modulation frequencies, even when the trained and untrained stimuli had one modulation frequency in common. This specificity indicates that training modified circuitry that had combined spectrotemporal tuning, and therefore that circuits with such tuning can influence perception. These results are consistent with the possibility that the auditory system analyzes sounds through filters tuned to combined spectrotemporal modulation.
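The stimuli described above are moving-ripple sounds: a bank of log-spaced tone carriers whose amplitudes follow a sinusoidal modulation across both log-frequency (cyc/oct) and time (Hz). A minimal sketch of such a generator is below; the function name, component count, carrier range, and the sign convention for drift direction are illustrative assumptions, not the authors' actual stimulus code.

```python
import numpy as np

def ripple_stimulus(spec_mod=0.5, temp_mod=32.0, depth=1.0,
                    f0=250.0, n_components=100, n_octaves=5.0,
                    dur=0.5, fs=44100):
    """Sketch of a moving-ripple stimulus: log-spaced tones whose
    amplitudes follow a sinusoidal spectrotemporal envelope.

    spec_mod: spectral modulation density in cyc/oct.
    temp_mod: temporal modulation rate in Hz (flip its sign to reverse
              the ripple drift direction; conventions vary across labs).
    depth:    modulation depth, 0 (flat) to 1 (full).
    """
    t = np.arange(int(dur * fs)) / fs
    x = np.linspace(0.0, n_octaves, n_components)      # octaves above f0
    freqs = f0 * 2.0 ** x
    rng = np.random.default_rng(0)
    phases = rng.uniform(0, 2 * np.pi, n_components)   # random carrier phases
    sig = np.zeros_like(t)
    for f, xi, ph in zip(freqs, x, phases):
        # Envelope is sinusoidal in the combined phase (rate*t + density*octave)
        env = 1.0 + depth * np.sin(2 * np.pi * (temp_mod * t + spec_mod * xi))
        sig += env * np.sin(2 * np.pi * f * t + ph)
    return sig / np.max(np.abs(sig))                   # peak-normalize
```

Setting `spec_mod=0` yields a purely temporal (sinusoidal AM) stimulus and `temp_mod=0` a purely spectral ripple, matching the three trained conditions in the abstract.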
Download full-text PDF |
Source |
---|---|
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3519395 | PMC |
http://dx.doi.org/10.1523/JNEUROSCI.5732-11.2012 | DOI Listing |
Hum Brain Mapp
February 2025
Université libre de Bruxelles (ULB), UNI - ULB Neuroscience Institute, Laboratoire de Neuroanatomie et Neuroimagerie translationnelles (LN2T), Brussels, Belgium.
Language control processes allow for flexible manipulation of, and access to, context-appropriate verbal representations. Functional magnetic resonance imaging (fMRI) studies have localized the brain regions involved in language control processes, usually by comparing high vs. low lexical-semantic control conditions during verbal tasks.
PLoS Comput Biol
January 2025
Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland, United States of America.
Characterizing neuronal responses to natural stimuli remains a central goal in sensory neuroscience. In auditory cortical neurons, the stimulus selectivity of elicited spiking activity is summarized by a spectrotemporal receptive field (STRF) that relates neuronal responses to the stimulus spectrogram. Though effective in characterizing primary auditory cortical responses, STRFs of non-primary auditory neurons can be quite intricate, reflecting their mixed selectivity.
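The STRF mentioned here is, at its simplest, a linear kernel: the predicted firing rate is the spectrogram convolved in time with a frequency-by-lag weight matrix. A minimal sketch of that linear prediction follows; the function name and array layout are assumptions for illustration, not the snippet's actual method.

```python
import numpy as np

def strf_predict(strf, spectrogram):
    """Linear STRF prediction:
    r(t) = sum over frequency f and lag tau of STRF[f, tau] * S[f, t - tau].

    strf:        (n_freq, n_lags) weight matrix.
    spectrogram: (n_freq, n_time) stimulus spectrogram.
    Returns a length-n_time predicted rate trace.
    """
    n_freq, n_lags = strf.shape
    _, n_time = spectrogram.shape
    rate = np.zeros(n_time)
    for tau in range(n_lags):
        # Delay the spectrogram by tau bins, then weight by the STRF slice
        shifted = np.zeros_like(spectrogram)
        shifted[:, tau:] = spectrogram[:, :n_time - tau]
        rate += strf[:, tau] @ shifted
    return rate
```

The "intricate" non-primary STRFs the snippet refers to are exactly the cases where this single linear kernel stops being a good summary and mixed selectivity requires richer models.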
J Acoust Soc Am
November 2024
School of Psychology and Humanities, University of Central Lancashire, Preston, PR1 2HE, United Kingdom.
Two competing accounts propose that the disruption of short-term memory by irrelevant speech arises either from interference-by-process (e.g., the changing-state effect) or from attentional capture, but it is unclear how whispering affects the irrelevant speech effect.
bioRxiv
November 2024
Oregon Hearing Research Center, Oregon Health and Science University, Portland, OR 97239, USA.
Commun Biol
November 2024
Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA.
Current tests of hearing fail to diagnose pathologies in ~10% of patients seeking help for hearing difficulties. Neural ensemble responses to perceptually relevant cues in the amplitude envelope, termed envelope following responses (EFR), hold promise as an objective diagnostic tool to probe these 'hidden' hearing difficulties. But clinical translation is impeded by current measurement approaches involving static amplitude-modulated (AM) tones, which are time-consuming and lack optimal spectrotemporal resolution.