Background: Fatigue is a major "invisible" symptom in people with multiple sclerosis (PwMS), which may affect speech. Automated speech analysis is an objective, rapid tool to capture digital speech biomarkers linked to functional outcomes.

Objective: To use automated speech analysis to assess multiple sclerosis (MS) fatigue metrics.

Methods: Eighty-four PwMS completed scripted and spontaneous speech tasks; fatigue was assessed with the Modified Fatigue Impact Scale (MFIS). Speech was processed using an automated speech analysis pipeline (ki elements: SIGMA speech processing library) to transcribe speech and extract features. Regression models assessed associations between speech features and fatigue and were validated in a separate set of 30 participants.
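The feature-extraction step described above can be sketched in miniature. This is an illustrative example only, not the SIGMA library: it assumes an ASR transcript with word-level timestamps (the word list and the 0.15 s pause threshold below are made up) and derives pause and utterance-duration features of the kind the study analyzes.

```python
# Illustrative sketch, not the SIGMA pipeline: derive pause and
# utterance-duration features from hypothetical word-level ASR timestamps.
words = [
    {"word": "today", "start": 0.00, "end": 0.40},
    {"word": "is",    "start": 0.55, "end": 0.70},
    {"word": "a",     "start": 1.30, "end": 1.35},
    {"word": "nice",  "start": 1.40, "end": 1.75},
    {"word": "day",   "start": 1.80, "end": 2.10},
]

def pause_features(words, min_pause=0.15):
    """Gaps between consecutive words of at least min_pause seconds count as pauses."""
    gaps = [b["start"] - a["end"] for a, b in zip(words, words[1:])]
    pauses = [g for g in gaps if g >= min_pause]
    return {
        "n_pauses": len(pauses),
        "mean_pause_dur": sum(pauses) / len(pauses) if pauses else 0.0,
        "utterance_dur": words[-1]["end"] - words[0]["start"],
    }

feats = pause_features(words)
```

For the timestamps above, this yields two pauses with a mean duration of about 0.375 s over a 2.1 s utterance; per-task summaries like these are what a downstream regression model would consume.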

Results: Cohort characteristics were as follows: mean age 49.8 years (standard deviation (SD) = 13.6), 71.4% female, 85% relapsing-onset, median Expanded Disability Status Scale (EDSS) score 2.5 (range: 0-6.5), mean MFIS 27.6 (SD = 19.4), and 30% with MFIS > 38. MFIS moderately correlated with pitch (r = 0.32, p = 0.005), pause duration (r = 0.33, p = 0.007), and utterance duration (r = 0.31, p = 0.0111). A logistic model using speech features from multiple tasks accurately classified MFIS in the training set (area under the curve (AUC) = 0.95, R² = 0.59, p < 0.001) and test set (AUC = 0.93, R² = 0.54, p = 0.0222). Adjusting for EDSS, processing speed, and depression in sensitivity analyses did not affect model accuracy.
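The AUC figures reported above can be illustrated with a minimal sketch. This is not the authors' model: the feature values, labels, and decision scores below are placeholders, and because a logistic model's predicted probability is monotone in its linear predictor, AUC can be computed directly from any such score via the Mann-Whitney rank identity.

```python
# Illustrative sketch only -- not the study's fitted model.
# AUC for a binary fatigue label (high fatigue = MFIS > 38) given
# continuous classifier scores, e.g. a logistic model's linear predictor.

def auc(scores, labels):
    """AUC via the Mann-Whitney identity: the probability that a random
    positive case scores above a random negative case (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical data: mean pause duration (s) as the score, MFIS > 38 as label.
pause_dur = [0.9, 0.7, 0.8, 0.3, 0.4, 0.2]
labels    = [1,   1,   1,   0,   0,   0]
print(auc(pause_dur, labels))  # perfectly separated toy data
```

An AUC of 0.95 as reported in training means a randomly chosen high-fatigue participant outscores a randomly chosen low-fatigue participant 95% of the time.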

Conclusion: Fatigue may be assessed using simple, low-burden speech tasks that correlate with gold-standard subjective fatigue measures.

Source: http://dx.doi.org/10.1177/13524585241303855


