Non-fluent speech is one of the most common impairments in post-stroke aphasia. Its rehabilitation is particularly challenging because patients are rarely able to produce, and therefore practise, fluent speech. Speech entrainment is a behavioural technique that enables patients with non-fluent aphasia to speak fluently. However, its mechanisms are not well understood, and the degree of improved fluency with speech entrainment varies among individuals with non-fluent aphasia. In this study, we evaluated the behavioural and neuroanatomical factors associated with better speech fluency during the training phase of speech entrainment. We used a lesion-symptom mapping approach to define the relationship between chronic stroke location on MRI and the number of different words per second produced during speech entrainment versus spontaneous speech in picture description. The behavioural variable of interest was the speech entrainment/picture description ratio, which, if greater than 1, indicated greater speech output during speech entrainment than during picture description. We used machine learning (a shallow neural network) to assess the statistical significance and out-of-sample predictive accuracy of the neuroanatomical model and its regional contributors. We observed that better assisted speech (a higher speech entrainment/picture description ratio) was achieved by individuals who had preservation of the posterior middle temporal gyrus, inferior fronto-occipital fasciculus and uncinate fasciculus, while exhibiting lesions in areas typically associated with non-fluent aphasia, such as the superior longitudinal fasciculus and the precentral, inferior frontal, supramarginal and insular cortices. Our findings suggest that individuals with dorsal stream damage but preservation of ventral stream structures are more likely to achieve more fluent speech with the aid of speech entrainment than in spontaneous speech.
This observation provides insight into the mechanisms of non-fluent speech in aphasia and has potential implications for future research using speech entrainment for rehabilitation of non-fluent aphasia.
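The behavioural measure above (different words per second, compared across the two conditions) can be sketched in a few lines of Python. This is a minimal illustration of the ratio's definition, not the study's scoring pipeline; the transcripts and durations below are invented for the example.

```python
def words_per_second(transcript: str, duration_s: float) -> float:
    """Number of *different* words (types) divided by sample duration in seconds."""
    types = {w.strip(".,!?;:").lower() for w in transcript.split()}
    types.discard("")  # drop tokens that were pure punctuation
    return len(types) / duration_s

def se_pd_ratio(se_wps: float, pd_wps: float) -> float:
    """Speech entrainment / picture description ratio.

    Values above 1 mean more different words per second with entrainment
    than in spontaneous picture description.
    """
    return se_wps / pd_wps

# Hypothetical samples for one patient (illustrative only):
se = words_per_second("the man is walking to the store", 4.0)  # entrained speech
pd = words_per_second("the the man", 6.0)                      # picture description
print(se_pd_ratio(se, pd))  # > 1: fluency improves under entrainment
```

Note that counting word *types* rather than tokens means repetitions ("the the man") do not inflate the spontaneous-speech score.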
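The abstract also mentions using a shallow neural network to test the out-of-sample predictive accuracy of the neuroanatomical model. A minimal sketch of that kind of analysis, using scikit-learn's MLPRegressor with cross-validated predictions on synthetic lesion data, might look as follows. The region count, patient count, and the synthetic relationship between regional damage and the ratio are all assumptions for illustration, not the study's data or architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Hypothetical data: proportional damage in 20 regions for 50 patients,
# with a synthetic SE/PD ratio driven by two of those regions plus noise.
X = rng.uniform(0.0, 1.0, size=(50, 20))
y = 1.0 + 0.8 * X[:, 3] - 0.5 * X[:, 7] + rng.normal(0.0, 0.05, size=50)

# A single small hidden layer = "shallow" network; lbfgs suits small samples.
model = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                     max_iter=2000, random_state=0)

pred = cross_val_predict(model, X, y, cv=5)  # out-of-sample predictions
r = np.corrcoef(y, pred)[0, 1]               # predictive accuracy
print(f"out-of-sample r = {r:.2f}")
```

Statistical significance of such a model is typically assessed by permutation: refit after shuffling `y` many times and compare the observed `r` against that null distribution.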


Source

PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6885692
DOI: http://dx.doi.org/10.1093/brain/awz309

Publication Analysis

Top Keywords

speech entrainment: 32
speech: 21
non-fluent aphasia: 16
non-fluent speech: 12
speech fluency: 8
rehabilitation non-fluent: 8
speech aphasia: 8
fluent speech: 8
entrainment: 8
aid speech: 8

Similar Publications

Humans rarely speak without producing co-speech gestures of the hands, head, and other parts of the body. Co-speech gestures are also highly restricted in how they are timed with speech, typically synchronizing with prosodically-prominent syllables. What functional principles underlie this relationship? Here, we examine how the production of co-speech manual gestures influences spatiotemporal patterns of the oral articulators during speech production.


Transient disruption or permanent damage to the left Frontal Aslant Tract (FAT) is associated with deficits in speech production. The present study examined the application of theta (4 Hz) high-definition transcranial alternating current stimulation (HD-tACS) over the left SMA and IFG (as parts of the FAT) as a potential multisite protocol to modulate neural and behavioral correlates of speech motor control. Twenty-one young adults participated in three counterbalanced sessions in which they received in-phase, anti-phase, and sham theta HD-tACS.


Speech comprehension involves detecting words and interpreting their meaning according to the preceding semantic context. This process is thought to be underpinned by a predictive neural system that uses that context to anticipate upcoming words.


Concurrent processing of the prosodic hierarchy is supported by cortical entrainment and phase-amplitude coupling.

Cereb Cortex

December 2024

Institute for the Interdisciplinary Study of Language Evolution, University of Zurich, Affolternstrasse 56, 8050 Zürich, Switzerland.

Models of phonology posit a hierarchy of prosodic units that is relatively independent from syntactic structure, requiring its own parsing. It remains unexplored how this prosodic hierarchy is represented in the brain. We investigated this foundational question by means of an electroencephalography (EEG) study.


The slowing and reduction of auditory responses in the brain are recognized side effects of increased pure tone thresholds, impaired speech recognition, and aging. However, it remains controversial whether central slowing is primarily linked to brain processes such as atrophy, or is also associated with the slowing of temporal neural processing from the periphery. Here we analyzed electroencephalogram (EEG) responses that most likely reflect medial geniculate body (MGB) responses to passive listening of phonemes in 80 subjects ranging in age from 18 to 76 years, in whom the peripheral auditory responses had been analyzed in detail (Schirmer et al.

