When humans listen to speech, their neural activity tracks the slow amplitude fluctuations of the speech signal over time, known as the speech envelope. Studies suggest that the fidelity of this tracking is related to the quality of speech comprehension. However, a critical unanswered question is how envelope tracking arises and what role it plays in language development. Relatedly, its causal role in comprehension remains unclear, as some studies have found it to be present even for unintelligible speech. Using electroencephalography, we investigated whether the neural activity of newborns and 6-month-olds tracks the speech envelope of familiar and unfamiliar languages, in order to explore the developmental origins and functional role of envelope tracking. Our results show that amplitude and phase tracking take place at birth for familiar and unfamiliar languages alike, i.e., independently of prenatal experience. However, by 6 months language familiarity modulates the ability to track the amplitude of the speech envelope, while phase tracking remains universal. Our findings support the hypothesis that amplitude and phase tracking may represent two distinct neural mechanisms of oscillatory synchronisation and thus play different roles in speech perception.
Full text: PMC7847966 (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7847966) | DOI: 10.1016/j.dcn.2021.100915 (http://dx.doi.org/10.1016/j.dcn.2021.100915)
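As a concrete illustration of the quantity these studies track, below is a minimal sketch of extracting the slow amplitude envelope from a speech recording (Hilbert magnitude followed by low-pass filtering). This is a generic textbook pipeline, not the preprocessing used in the paper above; the 10 Hz cutoff and filter order are illustrative assumptions.

```python
# Minimal, illustrative envelope extraction; not the cited paper's pipeline.
import numpy as np
from scipy.io import wavfile
from scipy.signal import hilbert, butter, filtfilt

def speech_envelope(wav_path, cutoff_hz=10.0):
    fs, x = wavfile.read(wav_path)            # sampling rate, raw samples
    x = x.astype(float)
    if x.ndim > 1:                            # collapse stereo to mono
        x = x.mean(axis=1)
    env = np.abs(hilbert(x))                  # instantaneous amplitude
    b, a = butter(4, cutoff_hz / (fs / 2))    # low-pass: keep slow fluctuations
    return fs, filtfilt(b, a, env)            # zero-phase smoothing
```

In practice the envelope is usually downsampled to the EEG sampling rate before any tracking analysis.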
Sci Rep, January 2025. RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Forskningsveien 3A, Oslo, 0373, Norway.
Periodic sensory inputs entrain oscillatory brain activity, reflecting a neural mechanism that might be fundamental to temporal prediction and perception. Most environmental rhythms and patterns in human behavior, such as walking, dancing, and speech, do not, however, display strict isochrony but are instead quasi-periodic. Research has shown that neural tracking of speech is driven by modulations of the amplitude envelope, especially via sharp acoustic edges, which serve as prominent temporal landmarks.
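The "sharp acoustic edges" mentioned here are often operationalised as local maxima in the envelope's rate of change. A hedged sketch, reusing the output of speech_envelope() from the earlier block; the threshold and minimum spacing are illustrative assumptions, not values from the cited work:

```python
# Locate candidate acoustic edges as peaks in the rising slope of the envelope.
import numpy as np
from scipy.signal import find_peaks

def acoustic_edges(env, fs, min_gap_s=0.1):
    d_env = np.gradient(env) * fs                     # slope in amplitude/second
    d_env[d_env < 0] = 0                              # keep rising edges only
    peaks, _ = find_peaks(d_env,
                          height=0.5 * d_env.std(),   # crude prominence floor
                          distance=int(min_gap_s * fs))
    return peaks / fs                                 # edge times in seconds
```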
J Cogn Neurosci, January 2025. National Central University, Taoyuan City, Taiwan.
Pitch variation of the fundamental frequency (F0) is critical to speech understanding, especially in noisy environments. Degrading the F0 contour reduces behaviorally measured speech intelligibility, posing greater challenges for tonal languages like Mandarin Chinese, where the F0 pattern determines semantic meaning. However, how neural tracking of Mandarin speech is affected by degraded F0 information in noisy environments remains unclear.
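For readers unfamiliar with F0 contours: the fundamental frequency can be estimated frame by frame, for example with the textbook autocorrelation method sketched below. This is illustrative only; the cited study's stimuli were presumably manipulated with a dedicated vocoder, and all parameter values here are assumptions.

```python
# Frame-wise autocorrelation F0 estimator (textbook method, illustrative only).
import numpy as np

def f0_contour(x, fs, frame_s=0.04, fmin=75, fmax=400):
    n = int(frame_s * fs)
    lags = np.arange(int(fs / fmax), int(fs / fmin))   # plausible pitch periods
    f0 = []
    for start in range(0, len(x) - n, n // 2):         # 50% frame overlap
        frame = x[start:start + n] * np.hanning(n)
        ac = np.correlate(frame, frame, mode='full')[n - 1:]
        lag = lags[np.argmax(ac[lags])]                # best-matching period
        f0.append(fs / lag if ac[lag] > 0.3 * ac[0] else 0.0)  # 0 = unvoiced
    return np.array(f0)
```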
Imaging Neurosci (Camb), April 2024. Department of Electrical Engineering, Columbia University, New York, NY, United States.
Listeners with hearing loss have trouble following a conversation in multitalker environments. While modern hearing aids can generally amplify speech, these devices are unable to tune in to a target speaker without first knowing which speaker the user aims to attend to. Brain-controlled hearing aids have been proposed using auditory attention decoding (AAD) methods, but current methods use the same model to compare the speech stimulus and neural response, regardless of the dynamic overlap between talkers, which is known to influence neural encoding.
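The comparison step that AAD methods share can be stated compactly: reconstruct an envelope from EEG (see the decoder sketch at the end of this listing), then attend to whichever talker's envelope it matches best. A minimal sketch; the function and variable names are illustrative, not from the cited paper:

```python
# Pick the attended talker by correlating an EEG-reconstructed envelope
# with each talker's acoustic envelope (illustrative sketch).
import numpy as np

def decode_attention(reconstructed, env_talker_a, env_talker_b):
    r_a = np.corrcoef(reconstructed, env_talker_a)[0, 1]
    r_b = np.corrcoef(reconstructed, env_talker_b)[0, 1]
    return ('A', r_a) if r_a >= r_b else ('B', r_b)
```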
J Neurosci, January 2025. Department of Psychology, University of Lübeck, Lübeck, Germany.
Amplitude compression is an indispensable feature of contemporary audio production and is especially relevant in modern hearing aids. How the cortex processes amplitude-compressed speech signals is not well studied, however, and compression may yield undesired side effects: we hypothesize that compressing the amplitude envelope of continuous speech reduces neural tracking. Yet leveraging such a 'compression side effect' on unwanted, distracting sounds could potentially support attentive listening if it effectively reduces their neural tracking.
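For concreteness, the kind of manipulation at issue, static amplitude compression of the envelope, can be sketched as below. The threshold and ratio are illustrative placeholders, not the values used in the cited experiment:

```python
# Static envelope compression: reduce gain above a threshold by a fixed ratio.
import numpy as np

def compress_envelope(env, threshold_db=-30.0, ratio=4.0):
    eps = 1e-12
    level_db = 20 * np.log10(np.maximum(env, eps))    # envelope level in dB
    over = np.maximum(level_db - threshold_db, 0.0)   # excess above threshold
    out_db = level_db - over * (1.0 - 1.0 / ratio)    # compress the excess
    return 10 ** (out_db / 20.0)
```

A higher ratio flattens the envelope more strongly, which is exactly the property hypothesised above to reduce neural tracking.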
Front Hum Neurosci, January 2025. Center for Ear-EEG, Department of Electrical and Computer Engineering, Aarhus University, Aarhus, Denmark.
Recent progress in auditory attention decoding (AAD) rests on algorithms that relate the audio envelope to the neurophysiological response. The most popular approach reconstructs the audio envelope from electroencephalogram (EEG) signals. These methods rely primarily on the exogenous response driven by the physical characteristics of the stimuli.
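The envelope-reconstruction approach described here is commonly implemented as a time-lagged linear "backward model" fit with ridge regression. A hedged sketch under that assumption; the array shapes, lag count, and regularisation strength are illustrative, not taken from the cited work:

```python
# Backward model: reconstruct the audio envelope from multichannel EEG
# with time-lagged ridge regression (illustrative sketch).
import numpy as np

def lagged(eeg, n_lags):
    """Stack n_lags delayed copies of each channel: (T, C) -> (T, C * n_lags)."""
    T, C = eeg.shape
    X = np.zeros((T, C * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * C:(lag + 1) * C] = eeg[:T - lag]
    return X

def train_decoder(eeg, envelope, n_lags=32, alpha=1e3):
    X = lagged(eeg, n_lags)                            # (T, C * n_lags)
    XtX = X.T @ X + alpha * np.eye(X.shape[1])         # ridge-regularised Gram
    return np.linalg.solve(XtX, X.T @ envelope)        # decoder weights

def reconstruct(eeg, weights, n_lags=32):
    return lagged(eeg, n_lags) @ weights               # predicted envelope
```

The correlation between the reconstructed and actual envelopes then serves as the tracking measure, and feeds directly into a comparison step like decode_attention() sketched earlier.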