Speech perception depends on the dynamic interplay of bottom-up and top-down information along a hierarchically organized cortical network. Here, we test, for the first time in the human brain, whether neural processing of attended speech is dynamically modulated by task demand using a context-free discrimination paradigm. Electroencephalographic signals were recorded during three parallel experiments that differed only in the phonological feature of discrimination (word, vowel, and lexical tone, respectively). The event-related potentials (ERPs) revealed task modulation of speech processing at approximately 200 ms (P2) after stimulus onset, likely influencing which phonological information is retained in memory. For the phonological comparison of sequential words, task modulation occurred later, at approximately 300 ms (N3 and P3), reflecting the engagement of task-specific cognitive processes. The ERP results were consistent with changes in delta-theta neural oscillations, suggesting the involvement of cortical tracking of speech envelopes. The study thus provides neurophysiological evidence for goal-oriented modulation of attended speech and calls for speech perception models that incorporate limited memory capacity and goal-oriented optimization mechanisms.
DOI: http://dx.doi.org/10.1093/cercor/bhac315
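The ERP and oscillation analyses described in the abstract above can be illustrated in miniature: epoch a signal around stimulus onsets, average to an ERP, read out a peak in a P2 window near 200 ms, and band-pass into the delta-theta range. This is a minimal sketch, not the authors' pipeline; the sampling rate, windows, filter settings, and random one-channel "EEG" are all illustrative assumptions.

```python
# Minimal ERP sketch (not the authors' pipeline): epoch synthetic EEG around
# stimulus onsets, average to an ERP, read out a P2-window peak near 200 ms,
# and band-pass into the delta-theta range. All parameters are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500                                  # sampling rate, Hz (assumed)
rng = np.random.default_rng(0)
eeg = rng.standard_normal(fs * 120)       # 2 minutes of one-channel noise "EEG"
onsets = np.arange(fs, fs * 110, fs)      # one stimulus per second (assumed)

# Epoch from -100 ms to +500 ms around each onset, baseline-correct, average.
pre, post = int(0.1 * fs), int(0.5 * fs)
epochs = np.stack([eeg[o - pre:o + post] for o in onsets])
epochs -= epochs[:, :pre].mean(axis=1, keepdims=True)   # baseline correction
erp = epochs.mean(axis=0)

# P2 read-out: peak amplitude in a 150-250 ms post-onset window.
t = (np.arange(erp.size) - pre) / fs * 1000             # time axis in ms
win = (t >= 150) & (t <= 250)
print(f"P2 window peak: {erp[win].max():.3f} (arbitrary units)")

# Delta-theta band (here ~1-8 Hz), as used in envelope-tracking analyses.
b, a = butter(4, [1, 8], btype="bandpass", fs=fs)
delta_theta = filtfilt(b, a, eeg)
```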
J Speech Lang Hear Res
January 2025
Department of Special Education, Central China Normal University, Wuhan.
Purpose: This cross-sectional study explored how the speechreading ability of adults with hearing impairment (HI) in China affects their perception of the four Mandarin Chinese lexical tones: high (Tone 1), rising (Tone 2), falling-rising (Tone 3), and falling (Tone 4). We predicted that higher speechreading ability would result in better tone performance and that accuracy would vary across the individual tones.
Method: A total of 136 young adults with HI (ages 18-25 years) in China participated in the study and completed Chinese speechreading and tone awareness tests.
J Speech Lang Hear Res
January 2025
Division of Speech Pathology and Audiology, Research Institute of Audiology and Speech Pathology, College of Natural Sciences, Hallym University, Chuncheon, South Korea.
Purpose: Tools that can reliably measure changes in the perception of tinnitus following interventions are lacking. The minimum masking level, defined as the lowest level at which tinnitus is completely masked, is a candidate for quantifying changes in tinnitus perception. In this study, we aimed to determine minimal clinically important differences for minimum masking level.
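As a rough illustration of how such thresholds can be derived, the sketch below computes two common distribution-based estimates (half the baseline standard deviation, and a smallest-detectable-change figure from the standard error of measurement) for placeholder minimum-masking-level data. The study's actual method may differ (e.g., anchor-based approaches), and the reliability value here is an assumption.

```python
# Hedged sketch of two distribution-based estimates of a minimal clinically
# important difference (MCID) for a repeated measure such as the minimum
# masking level, in dB. Placeholder data; the study's actual method may be
# anchor-based, and the test-retest reliability below is assumed.
import numpy as np

rng = np.random.default_rng(1)
pre = rng.normal(55, 8, 40)               # baseline MML, dB (placeholder)
post = pre - rng.normal(4, 5, 40)         # post-intervention MML (placeholder)
change = pre - post                       # positive = masking level dropped

# 1) Half the standard deviation of baseline scores (a common rule of thumb).
mcid_half_sd = 0.5 * pre.std(ddof=1)

# 2) Smallest detectable change from the standard error of measurement,
#    given an assumed test-retest reliability.
reliability = 0.85
sem = pre.std(ddof=1) * np.sqrt(1 - reliability)
sdc = 1.96 * sem * np.sqrt(2)

print(f"mean change: {change.mean():.1f} dB")
print(f"0.5*SD MCID: {mcid_half_sd:.1f} dB; SEM-based SDC: {sdc:.1f} dB")
```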
eLife
January 2025
State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University & IDG/McGovern Institute for Brain Research, Beijing, China.
Speech comprehension involves the dynamic interplay of multiple cognitive processes, from basic sound perception through linguistic encoding to complex semantic-conceptual interpretation. How the brain handles these diverse streams of information processing remains poorly understood. Applying Hidden Markov Modeling to fMRI data obtained during spoken narrative comprehension, we reveal that whole-brain networks predominantly oscillate within a tripartite latent state space.
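A minimal sketch of the latent-state idea, assuming the hmmlearn package and placeholder data (the study's own toolchain and preprocessing are not described here): fit a three-state Gaussian HMM to parcel-by-time fMRI-like signals and decode the state sequence.

```python
# Sketch of latent-state discovery in fMRI-like time series with a Gaussian
# HMM (assumes the hmmlearn package; the study's actual toolchain is unknown).
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(2)
X = rng.standard_normal((600, 50))        # placeholder: 600 TRs x 50 parcels

# Three components echo the "tripartite latent state space" in the abstract.
model = GaussianHMM(n_components=3, covariance_type="diag", n_iter=100,
                    random_state=0)
model.fit(X)                              # EM over the parcel time series
states = model.predict(X)                 # Viterbi path: one state per TR

# Fractional occupancy: share of time spent in each latent state.
occupancy = np.bincount(states, minlength=3) / states.size
print("fractional occupancy:", np.round(occupancy, 3))
```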
J Acoust Soc Am
January 2025
Department of Electronics Engineering, Pusan National University, Busan, South Korea.
The amount of information contained in speech signals is a fundamental concern of speech-based technologies and is particularly relevant to speech perception. Measuring the mutual information of actual speech signals is non-trivial, and quantitative measurements have not been extensively conducted to date. Recent advances in machine learning have made it possible to estimate mutual information directly from data.
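As a concrete baseline for what "estimating mutual information from data" involves, the sketch below uses a simple histogram plug-in estimator on placeholder signals. The machine-learning estimators the abstract refers to (e.g., neural-network-based ones) scale better to real speech, but they target the same I(X;Y) quantity.

```python
# Plug-in (histogram) estimate of mutual information between two 1-D signals.
# A deliberately simple baseline on placeholder data; the abstract refers to
# machine-learning estimators, which handle high-dimensional speech better.
import numpy as np

def mutual_information(x, y, bins=32):
    """I(X;Y) in bits from a 2-D histogram of paired samples."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()             # joint probability per bin
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X (column vector)
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y (row vector)
    nz = pxy > 0                          # avoid log(0) on empty bins
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(3)
x = rng.standard_normal(20000)
y = 0.8 * x + 0.6 * rng.standard_normal(20000)   # correlated "channel output"
print(f"I(X;Y) ~ {mutual_information(x, y):.2f} bits")
```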
J Acoust Soc Am
January 2025
Leiden University Centre for Linguistics, Leiden University, Leiden, The Netherlands.
Previous studies have suggested that the pitch characteristics of lexical tones in Standard Chinese influence various sensory perceptions, but whether they iconically bias emotional experience has remained unclear. We analyzed the arousal and valence ratings of bi-syllabic words in two corpora (Study 1) and conducted an affect rating experiment using a carefully designed corpus of bi-syllabic words (Study 2). Two-alternative forced-choice tasks further tested the robustness of the lexical tones' affective iconicity in an auditory nonce-word context (Study 3).