While previous studies on language processing highlighted several ERP components in relation to specific stages of sound and speech processing, no study has yet combined them to obtain a comprehensive picture of language abilities in a single session. Here, we propose a novel task-free paradigm aimed at assessing multiple levels of speech processing by combining various speech and nonspeech sounds in an adaptation of a multifeature passive oddball design. We recorded EEG in healthy adult participants, who were presented with these sounds in the absence of sound-directed attention while being engaged in a primary visual task. This produced a range of responses indexing various levels of sound processing and language comprehension: (a) P1-N1 complex, indexing obligatory auditory processing; (b) P3-like dynamics associated with involuntary attention allocation for unusual sounds; (c) enhanced responses for native speech (as opposed to nonnative phonemes) from ∼50 ms after phoneme onset, indicating phonological processing; (d) amplitude advantage for familiar real words as opposed to meaningless pseudowords, indexing automatic lexical access; (e) topographic distribution differences in the cortical activation of action verbs versus concrete nouns, likely linked with the processing of lexical semantics. These multiple indices of speech-sound processing were acquired in a single attention-free setup that does not require any task or subject cooperation; subject to future research, the present protocol may potentially be developed into a useful tool for assessing the status of auditory and linguistic functions in uncooperative or unresponsive participants, including a range of clinical or developmental populations.
DOI: http://dx.doi.org/10.1111/psyp.13216
J Speech Lang Hear Res
January 2025
School of Humanities, Shenzhen University, China.
Purpose: This study investigated the influence of vowel quality on loudness perception and stress judgment in Mongolian, an agglutinative language with free word stress. We aimed to explore the effects of intrinsic vowel features, presentation order, and intensity conditions on loudness perception and stress assignment.
Method: Eight Mongolian short vowel phonemes (/ɐ/, /ə/, /i/, /ɪ/, /ɔ/, /o/, /ʊ/, and /u/) were recorded by a native Mongolian speaker of the Urad subdialect (the Chahar dialect group) in Inner Mongolia.
J Speech Lang Hear Res
January 2025
Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China.
Purpose: Neurotypical individuals show a robust "global precedence effect (GPE)" when processing hierarchically structured visual information. However, the auditory domain remains understudied. The current research serves to fill the knowledge gap on auditory global-local processing across the broader autism phenotype in the context of a tonal language background.
J Speech Lang Hear Res
January 2025
Aix-Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, France.
Purpose: Prelingual deaf children with cochlear implants show lower digit span test scores compared to normal-hearing peers, suggesting a working memory impairment. To pinpoint more precisely the subprocesses responsible for this impairment, we designed a sequence reproduction task with varying length (two to six stimuli), modality (auditory or visual), and compressibility (sequences with more or less regular patterns). Results on 22 school-age children with cochlear implants and 21 normal-hearing children revealed a deficit of children with cochlear implants only in the auditory modality.
Lang Speech
January 2025
Department of Communication Sciences and Disorders, University of Haifa, Israel.
This study investigated the role of systematicity in word learning, focusing on Semitic morpho-phonology where words exhibit multiple levels of systematicity. Building upon previous research on phonological templates, we explored how systematicity based on such templates, whether they encode meanings or not, influenced word learning in preschool-age Hebrew-speaking children. We examined form-meaning systematicity, where words share phonological templates and carry similar categorical meanings of manner-of-motion (e.
Sci Rep
January 2025
RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Forskningsveien 3A, Oslo, 0373, Norway.
Periodic sensory inputs entrain oscillatory brain activity, reflecting a neural mechanism that might be fundamental to temporal prediction and perception. However, most environmental rhythms and patterns in human behavior, such as walking, dancing, and speech, do not display strict isochrony but are instead quasi-periodic. Research has shown that neural tracking of speech is driven by modulations of the amplitude envelope, especially via sharp acoustic edges, which serve as prominent temporal landmarks.