Two cases of neuroleptic malignant syndrome (NMS) accompanied by dysarthric disorders have been reported. In both cases, the NMS was phenomenologically related to the malignant dopamine withdrawal syndrome and to the akinetic crisis of parkinsonism. The reported dysarthric disorders should be interpreted as a differential-diagnostic sign excluding pernicious catatonia.
DOI: http://dx.doi.org/10.1055/s-2007-979575
J Speech Lang Hear Res
January 2025
Department of Communicative Disorders and Deaf Education, Utah State University, Logan.
Purpose: In effortful listening conditions, speech perception and adaptation abilities are constrained by aging and often linked to age-related hearing loss and cognitive decline. Given that older adults are frequent communication partners of individuals with dysarthria, the current study examines cognitive-linguistic and hearing predictors of dysarthric speech perception and adaptation in older listeners.
Method: Fifty-eight older adult listeners (aged 55-80 years) completed a battery of hearing and cognitive tasks administered via the National Institutes of Health Toolbox.
Diagnostics (Basel)
November 2024
College of Medicine, National Chung Hsing University, Taichung 402202, Taiwan.
Dysarthria, a motor speech disorder caused by neurological damage, significantly hampers speech intelligibility, creating communication barriers for affected individuals. Voice conversion (VC) systems have been developed to address this, yet accurately predicting phonemes in dysarthric speech remains a challenge due to its variability. This study proposes a novel approach that integrates Fuzzy Expectation Maximization (FEM) with diffusion models for enhanced phoneme prediction, aiming to improve the quality of dysarthric voice conversion.
Am J Speech Lang Pathol
December 2024
Department of Communicative Disorders and Deaf Education, Utah State University, Logan.
Purpose: The purpose of the current study was to develop and test extensions to Autoscore, an automated approach for scoring listener transcriptions against target stimuli, for scoring the Speech Intelligibility Test (SIT), a widely used test for quantifying intelligibility in individuals with dysarthria.
Method: Three main extensions to Autoscore were created: a compound rule, a contractions rule, and a numbers rule. We used two sets of previously collected listener SIT transcripts (N = 4,642) from databases of dysarthric speakers to evaluate the accuracy of the Autoscore SIT extensions.
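The three rule extensions described above can be illustrated with a minimal sketch. This is not Autoscore's actual implementation; the rule tables and function names below are hypothetical stand-ins showing how compound words, contractions, and numerals might be normalized so that equivalent listener spellings receive credit against the target stimulus.

```python
import re

# Illustrative (not Autoscore's actual) rule tables.
CONTRACTIONS = {"don't": ("do", "not"), "can't": ("can", "not")}
COMPOUNDS = {"mailbox": ("mail", "box"), "sunflower": ("sun", "flower")}
NUMBERS = {"2": "two", "3": "three", "10": "ten"}

def expand(tokens):
    """Apply the numbers, contraction, and compound rules so that
    equivalent spellings of the same word score as matches."""
    out = []
    for t in tokens:
        t = NUMBERS.get(t, t)          # numbers rule: "2" -> "two"
        if t in CONTRACTIONS:          # contractions rule: "don't" -> "do not"
            out.extend(CONTRACTIONS[t])
        elif t in COMPOUNDS:           # compound rule: "mailbox" -> "mail box"
            out.extend(COMPOUNDS[t])
        else:
            out.append(t)
    return out

def score(target, transcript):
    """Count target words credited in a listener transcript,
    returning (words correct, total target words)."""
    tgt = expand(re.findall(r"[\w']+", target.lower()))
    hyp = expand(re.findall(r"[\w']+", transcript.lower()))
    hits, remaining = 0, hyp.copy()
    for w in tgt:
        if w in remaining:
            remaining.remove(w)
            hits += 1
    return hits, len(tgt)
```

With these rules, a transcript of "the mail box is red" scores full credit against the target "the mailbox is red", since both expand to the same token sequence before matching.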
Sci Rep
November 2024
Department of Mechatronics Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, India.
Dysarthria, a motor speech disorder that impacts articulation and speech clarity, presents significant challenges for Automatic Speech Recognition (ASR) systems. This study proposes a groundbreaking approach to enhance the accuracy of Dysarthric Speech Recognition (DSR). A primary innovation lies in the integration of the SepFormer-Speech Enhancement Generative Adversarial Network (S-SEGAN), an advanced generative adversarial network tailored for Dysarthric Speech Enhancement (DSE), as a front-end processing stage for DSR systems.
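The two-stage architecture described above, an enhancement front end feeding a recognition back end, can be sketched in outline. The SepFormer-SEGAN model itself is not reproduced here; `enhance` and `recognize` are hypothetical placeholders standing in for the DSE and DSR stages, with trivial bodies so the sketch runs.

```python
def enhance(samples):
    """Placeholder for the S-SEGAN front end (DSE stage).
    Here: peak normalization stands in for learned enhancement."""
    peak = max(abs(s) for s in samples) or 1.0
    return [s / peak for s in samples]

def recognize(samples):
    """Placeholder for the ASR back end (DSR stage)."""
    return "<transcript>"

def dsr_pipeline(samples):
    # Front-end enhancement runs before recognition, as in the
    # architecture the study proposes.
    return recognize(enhance(samples))
```

The design point is that the enhancement stage is modular: it transforms the dysarthric audio before any recognition model sees it, so the back-end ASR system can remain unchanged.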
Am J Speech Lang Pathol
January 2025
Department of Communication Science and Disorders, Florida State University, Tallahassee.
Purpose: Perceptual training offers a promising, listener-targeted option for improving intelligibility of dysarthric speech. Cognitive resources are required for learning, and theoretical models of listening effort and engagement account for a role of listener motivation in allocation of such resources. Here, we manipulate training instructions to enhance motivation to test the hypothesis that increased motivation increases the intelligibility benefits of perceptual training.