Background/objectives: The Implicit Prosody Hypothesis (IPH) posits that individuals generate internal prosodic representations during silent reading, mirroring those produced in spoken language. While converging behavioral evidence supports the IPH, the underlying neurocognitive mechanisms remain largely unknown. Therefore, this study investigated the neurophysiological markers of sensitivity to speech rhythm cues during silent word reading.
Methods: EEGs were recorded while participants silently read four-word sequences, each composed of either trochaic words (stressed on the first syllable) or iambic words (stressed on the second syllable). Each sequence was followed by a target word that was either metrically congruent or incongruent with the preceding rhythmic pattern. To investigate the effects of metrical expectancy and lexical stress type, we examined single-trial event-related potentials (ERPs) and time-frequency representations (TFRs) time-locked to target words.
Results: The results showed significant differences based on the stress pattern expectancy and type. Specifically, words that carried unexpected stress elicited larger ERP negativities between 240 and 628 ms after the word onset. Furthermore, different frequency bands were sensitive to distinct aspects of the rhythmic structure in language. Alpha activity tracked the rhythmic expectations, and theta and beta activities were sensitive to both the expected rhythms and specific locations of the stressed syllables.
Conclusions: The findings clarify neurocognitive mechanisms of phonological and lexical mental representations during silent reading using a conservative data-driven approach. Similarity with neural response patterns previously reported for spoken language contexts suggests shared neural networks for implicit and explicit speech rhythm processing, further supporting the IPH and emphasizing the centrality of prosody in reading.
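The ERP analysis described above (larger negativities 240-628 ms after target-word onset for metrically incongruent words) can be illustrated with a minimal numpy sketch. The data here are simulated, and the effect size, noise level, and sampling rate are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
sfreq = 250                                # sampling rate in Hz (assumed)
times = np.arange(-0.2, 0.8, 1 / sfreq)    # epoch from -200 to +800 ms
window = (times >= 0.240) & (times <= 0.628)  # reported effect window

def simulate_epochs(n_trials, negativity_uv):
    """Simulate single-trial epochs (microvolts) with a broad
    negativity in the 240-628 ms window after target-word onset."""
    epochs = rng.normal(0.0, 5.0, size=(n_trials, times.size))
    epochs[:, window] += negativity_uv
    return epochs

# Incongruent targets carry a larger (more negative) deflection.
congruent = simulate_epochs(100, -1.0)
incongruent = simulate_epochs(100, -3.0)

# Mean amplitude difference in the effect window (incongruent minus
# congruent) is negative, mimicking the reported ERP negativity.
erp_diff = incongruent[:, window].mean() - congruent[:, window].mean()
print(erp_diff)
```

A real pipeline would of course work on recorded EEG epochs (e.g. via an EEG analysis toolbox) and assess the difference with single-trial statistics rather than a simple window mean, as the abstract's "conservative data-driven approach" implies.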
Full text: PMC — http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11592126
DOI: http://dx.doi.org/10.3390/brainsci14111142
R Soc Open Sci
September 2024
Centre for Neuroscience in Education, University of Cambridge, Cambridge, UK.
French and German poetry are classically considered to rely on fundamentally different linguistic structures to create rhythmic regularity, and their metrical structures are held to be poetically very distinct. However, the biophysical and neurophysiological constraints on the speakers of these poems are highly similar.
Q J Exp Psychol (Hove)
December 2024
Toronto Metropolitan University, Department of Psychology.
Even with the use of hearing aids (HAs), speech-in-noise (SIN) perception remains challenging for older adults, impacting communication and quality-of-life outcomes. The association between music perception and SIN outcomes is of interest, as there is evidence that professionally trained musicians are adept listeners in noisy environments. Thus, this study explored the association between music processing, cognitive factors, and the outcome variable of SIN perception in older adults with hearing loss.
Logoped Phoniatr Vocol
December 2024
Speech Prosody Studies Group, Dep. of Linguistics, State Univ. of Campinas, Campinas, Brazil.
Purpose: The analysis of acoustic parameters contributes to the characterisation of human communication development throughout the lifetime. The present paper intends to analyse suprasegmental features of European Portuguese in longitudinal conversational speech samples of three male public figures in uncontrolled environments across different ages, approximately 30 years apart.
Participants And Methods: Twenty prosodic features concerning intonation, intensity, rhythm, and pause measures were extracted semi-automatically from 360 speech intervals (3-4 interviews from each speaker × 30 speech intervals × 3 speakers), each lasting between 3 and 6 s.
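Pause measures of the kind described above can be derived from a frame-wise intensity contour. The following is a minimal sketch, assuming hypothetical synthetic data: the 10 ms frame duration, the 40 dB silence threshold, and the intensity values are illustrative choices, not parameters from the paper:

```python
import numpy as np

frame_dur = 0.010   # 10 ms analysis frames (assumed)
# Hypothetical intensity contour (dB) for one short interval:
# three stretches of speech (~60 dB) separated by two pauses (~30 dB).
intensity = np.array([60] * 120 + [30] * 40 + [62] * 150
                     + [28] * 30 + [59] * 60, dtype=float)

silence_threshold_db = 40.0            # assumed silence cut-off
is_pause = intensity < silence_threshold_db

total_s = intensity.size * frame_dur
pause_s = is_pause.sum() * frame_dur

# Two example features of the kind the study extracts:
# the proportion of time spent pausing, and the number of pauses
# (counted as silence onsets, i.e. 0 -> 1 transitions in the mask).
pause_proportion = pause_s / total_s
n_pauses = int(np.diff(is_pause.astype(int)).clip(min=0).sum())
print(pause_proportion, n_pauses)
```

In practice the contour would come from an acoustic analysis tool rather than a hand-built array, and intonation and rhythm features would be computed from the pitch contour and syllable timing in the same frame-wise fashion.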
Brain Lang
January 2025
Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs, CT 06269, USA. Electronic address:
Spoken language experience influences brain responses to sound, but it is unclear whether this neuroplasticity is limited to speech frequencies (>100 Hz) or also affects lower gamma ranges (∼30-60 Hz). Using the frequency-following response (FFR), a far-field phase-locked response to sound, we explore whether bilingualism influences the location of the strongest response in the gamma range. Our results indicate that the strongest gamma response for bilinguals is most often at 43 Hz, compared to 51 Hz for monolinguals.
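Locating the strongest response in the low-gamma range, as described above, amounts to a spectral peak search over 30-60 Hz. A minimal numpy sketch with a simulated signal (the sampling rate, duration, and noise level are assumptions; the 43 Hz component mimics the bilingual peak reported here):

```python
import numpy as np

fs = 1000                       # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)   # 2 s of simulated response
rng = np.random.default_rng(1)

# Simulated far-field response: a phase-locked 43 Hz component in noise.
signal = np.sin(2 * np.pi * 43 * t) + 0.5 * rng.normal(size=t.size)

# Magnitude spectrum of the response.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Restrict the peak search to the low-gamma range (30-60 Hz).
gamma = (freqs >= 30) & (freqs <= 60)
peak_hz = freqs[gamma][np.argmax(spectrum[gamma])]
print(peak_hz)  # 43.0
```

With real FFR data the spectrum would typically be computed on trial-averaged responses to sharpen the phase-locked component before picking the peak.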
Cereb Cortex
December 2024
Instituto de Investigaciones Biológicas Clemente Estable, Department of Integrative and Computational Neurosciences, Av. Italia 3318, Montevideo, 11.600, Uruguay.
A social scene is particularly informative when people are distinguishable. To understand somebody amid "cocktail-party" chatter, we automatically index their voice. This ability is underpinned by parallel processing of vocal spectral contours from speech sounds, but how this occurs in the brain's cortex has not yet been established.