Purpose: This study was designed to determine whether within-speaker fluctuations in speech intelligibility occurred among speakers with dysarthria who produced a reading passage, and, if they did, whether selected linguistic and acoustic variables predicted the variations in speech intelligibility.
Method: Participants with dysarthria included a total of 10 persons with Parkinson's disease and amyotrophic lateral sclerosis; a control group of 10 neurologically normal speakers was also studied. Each participant read a passage that was subsequently separated into consecutive breath groups for estimates of individual breath group intelligibility. Sixty listeners participated in 2 perceptual experiments, generating intelligibility scores across speakers and for each breath group produced by speakers with dysarthria.
Results: Individual participants with dysarthria showed fluctuations in intelligibility across breath groups. Their breath groups contained fewer words on average and had reduced interquartile ranges for the second formant (F2), the latter a global measure of articulatory mobility. Regression analyses with intelligibility measures as the criterion variable and linguistic and acoustic measures as predictor variables produced significant functions both within and across speakers, but the solutions were not the same.
Conclusions: Linguistic or acoustic variables that predict across-speaker variations in speech intelligibility may not function in the same way when within-speaker variations in intelligibility are considered.
DOI: http://dx.doi.org/10.1044/1092-4388(2005/090)
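To make the two measures named in the Results concrete, here is a minimal sketch: it computes an F2 interquartile range for one breath group and fits a linear regression of intelligibility on per-breath-group predictors. The formant values, predictor names, and scores are invented placeholders, not data or code from the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical F2 track (Hz) for one breath group; in practice these
# values would come from a formant tracker such as Praat.
f2_track = np.array([1450, 1520, 1610, 1580, 1340, 1700, 1490, 1655])

# Interquartile range of F2: a global index of articulatory mobility.
f2_iqr = np.percentile(f2_track, 75) - np.percentile(f2_track, 25)

# Placeholder per-breath-group predictors: [words per breath group, F2 IQR],
# with intelligibility proportions as the criterion variable.
X = np.array([[7, 210.0], [5, 150.0], [9, 260.0], [4, 120.0], [6, 190.0]])
y = np.array([0.82, 0.61, 0.90, 0.48, 0.73])

model = LinearRegression().fit(X, y)
print(f"F2 IQR = {f2_iqr:.1f} Hz; regression R^2 = {model.score(X, y):.2f}")
```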
J Child Lang
January 2025
ELTE-HUN-REN NAP Comparative Ethology research group, Research Centre for Natural Sciences, Institute of Cognitive Neuroscience and Psychology, Budapest, Hungary.
By comparing infant-directed speech with spouse- and dog-directed talk, we aimed to investigate how speakers modulate pitch and utterance length according to the speech context and the partner's expected needs and capabilities. We found that mean pitch was modulated in line with the partner's attentional needs, while pitch range and utterance length were modulated according to the partner's expected linguistic competence. In a nursery-rhyme situation, speakers used the highest pitch and widest pitch range with all partners, suggesting that the infant-directed context strongly influences these acoustic features.
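As an illustration only, the three acoustic measures mentioned (mean pitch, pitch range, utterance length) could be extracted with Praat's Python bindings roughly as follows; the file name is a placeholder, and this is a sketch rather than the authors' pipeline.

```python
import parselmouth  # Praat bindings; pip install praat-parselmouth

# "utterance.wav" is a placeholder path to one recorded utterance.
snd = parselmouth.Sound("utterance.wav")
pitch = snd.to_pitch()

f0 = pitch.selected_array['frequency']
voiced = f0[f0 > 0]  # drop unvoiced frames (reported as 0 Hz)

mean_pitch = voiced.mean()                 # tied to partner's attentional needs
pitch_range = voiced.max() - voiced.min()  # tied to expected linguistic competence
duration = snd.duration                    # utterance length proxy, in seconds

print(f"mean F0 {mean_pitch:.0f} Hz, range {pitch_range:.0f} Hz, "
      f"duration {duration:.2f} s")
```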
Background: Early detection of Mild Cognitive Impairment (MCI) is crucial for effective prevention. Traditional methods like expert judgment, clinical evaluations, and manual linguistic analyses are now complemented by Artificial Intelligence (AI). AI offers new avenues for identifying linguistic, facial, and acoustic markers of MCI.
JASA Express Lett
January 2025
Department of Linguistics, Yale University, New Haven, Connecticut 06520.
This study investigates the articulatory correlates of consonantal length contrasts in Japanese mimetic words using electromagnetic articulography data. Regression and dynamic time warping analyses applied to intragestural timing, kinematic properties, and intergestural timing reveal that Japanese geminates are characterized by longer closure phases, longer gestural plateaus, higher tongue tip positions, larger movements, and lower stiffness. Geminates also exhibit distinct timing relationships with adjacent vowels, specifically, longer times to target that allow for longer preceding vowels.
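For readers unfamiliar with dynamic time warping (DTW), the analysis named above, a bare-bones sketch follows; the tongue-tip trajectories are invented placeholders rather than the study's electromagnetic articulography data.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic dynamic-time-warping distance between two 1-D trajectories."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Placeholder tongue-tip height trajectories (mm); the geminate has a
# longer gestural plateau, mirroring the finding reported above.
singleton = np.array([0.0, 2.0, 6.0, 9.0, 6.0, 2.0, 0.0])
geminate = np.array([0.0, 2.0, 6.0, 9.5, 9.5, 9.5, 6.0, 2.0, 0.0])

print(f"DTW distance: {dtw_distance(singleton, geminate):.1f}")
```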
Alzheimers Dement
December 2024
Boston University Chobanian & Avedisian School of Medicine, Boston, MA, USA.
Background: Producing speech is a cognitively complex task, and speech can be collected through devices such as handheld recorders, tablets, and smartphones. Digital voice data capture information with millisecond-level precision and can serve as a widespread tool for collecting cognitively relevant data in almost any real-world environment. Digital voice recordings of spoken responses to neuropsychological test questions have been collected through the Framingham Heart Study (FHS) since 2005.
Background: Primary progressive aphasia (PPA) is a language-based dementia linked with underlying Alzheimer's disease (AD) or frontotemporal dementia. Clinicians often report difficulty differentiating between the logopenic (lv) and nonfluent/agrammatic (nfv) subtypes, as both variants present with disruptions to "fluency" yet for different underlying reasons. In English, acoustic and linguistic markers from connected speech samples have shown promise in machine learning (ML)-based differentiation of nfv from lv.
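A hedged sketch of what such ML-based differentiation might look like, assuming a feature matrix of acoustic and linguistic markers per speech sample and binary nfv/lv labels; every value here is a synthetic placeholder, not the study's data or model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-ins: rows are connected speech samples, columns are
# acoustic/linguistic markers (e.g., speech rate, pause ratio);
# labels: 0 = logopenic (lv), 1 = nonfluent/agrammatic (nfv).
X = rng.normal(size=(40, 5))
y = rng.integers(0, 2, size=40)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```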