Introduction: Screening for Alzheimer's disease neuropathologic change (ADNC) in individuals with atypical presentations is challenging but essential for clinical management. We trained automatic speech-based classifiers to distinguish frontotemporal dementia (FTD) patients with ADNC from those with frontotemporal lobar degeneration (FTLD).

Methods: We trained automatic classifiers with 99 speech features from 1-minute speech samples of 179 participants (ADNC = 36, FTLD = 60, healthy controls [HC] = 89). Patients' pathology was assigned based on autopsy or cerebrospinal fluid analytes. Structural network-based magnetic resonance imaging analyses identified anatomical correlates of distinct speech features.
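As a rough illustration of the kind of pipeline described, the sketch below trains a cross-validated classifier on a 99-dimensional speech-feature matrix. The model choice (standardized logistic regression), the 5-fold scheme, and all variable names are assumptions for illustration, not the authors' implementation; random placeholder data stands in for the real features.

```python
# Minimal sketch of a speech-feature classifier pipeline (illustrative;
# model choice and variable names are assumptions, not the authors' code).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(96, 99))       # placeholder: 96 patients x 99 speech features
y = rng.integers(0, 2, size=96)     # placeholder labels: 1 = ADNC, 0 = FTLD

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aucs = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(f"AUC = {aucs.mean():.2f} ± {aucs.std():.2f}")
```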

Results: Our classifier showed 0.88 ± 0.03 area under the curve (AUC) for ADNC versus FTLD and 0.93 ± 0.04 AUC for patients versus HC. Noun frequency and pause rate correlated with gray matter volume loss in the limbic and salience networks, respectively.
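The reported feature-anatomy associations are pairwise correlations between a speech measure and regional gray matter volume. A minimal sketch of that computation, assuming a Pearson correlation and synthetic placeholder data (the paper's actual statistic and covariates are not specified here):

```python
# Illustrative brain-behavior correlation: one speech feature vs. regional
# gray matter volume (statistic and variable names are assumptions).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
noun_frequency = rng.normal(size=96)                           # placeholder speech feature
limbic_gm_volume = 0.5 * noun_frequency + rng.normal(size=96)  # placeholder regional volumes

r, p = pearsonr(noun_frequency, limbic_gm_volume)
print(f"r = {r:.2f}, p = {p:.3g}")
```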

Discussion: Brief naturalistic speech samples can be used to screen FTD patients for underlying ADNC in vivo. This work supports the future development of digital assessment tools for FTD.

Highlights:
- We trained machine learning classifiers for frontotemporal dementia patients using natural speech.
- We grouped participants by neuropathological diagnosis (autopsy) or cerebrospinal fluid biomarkers.
- The classifiers accurately distinguished underlying pathology (Alzheimer's disease vs. frontotemporal lobar degeneration) in patients.
- We identified important features through an explainable artificial intelligence approach.
- This work lays the groundwork for a speech-based neuropathology screening tool.
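The highlights mention an explainable AI approach without naming it here; permutation feature importance is one common choice for ranking features by their contribution to classifier performance. The sketch below uses it as an assumed stand-in, with synthetic data and an arbitrary model.

```python
# One common "explainable AI" approach: permutation feature importance
# (an illustrative stand-in; the paper's exact method is not specified here).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(96, 99))
y = (X[:, 0] + rng.normal(scale=0.5, size=96) > 0).astype(int)  # feature 0 is informative

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
imp = permutation_importance(model, X_te, y_te, scoring="roc_auc",
                             n_repeats=20, random_state=0)
top = np.argsort(imp.importances_mean)[::-1][:5]
print("Top features by permutation importance:", top)
```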

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11095488
DOI: http://dx.doi.org/10.1002/alz.13748


Similar Publications

Quantifying Tinnitus Perception Improvement: Deriving the Minimal Clinically Important Difference of the Minimum Masking Level.

J Speech Lang Hear Res

January 2025

Division of Speech Pathology and Audiology, Research Institute of Audiology and Speech Pathology, College of Natural Sciences, Hallym University, Chuncheon, South Korea.

Purpose: Tools that can reliably measure changes in the perception of tinnitus following interventions are lacking. The minimum masking level, defined as the lowest level at which tinnitus is completely masked, is a candidate for quantifying changes in tinnitus perception. In this study, we aimed to determine the minimal clinically important difference for the minimum masking level.
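The snippet does not say how the minimal clinically important difference (MCID) was derived; one common distribution-based convention estimates it as half the baseline standard deviation. A minimal sketch under that assumption, with hypothetical masking-level data:

```python
# Distribution-based MCID sketch (0.5 x baseline SD) with hypothetical data;
# the study's actual derivation method is not given in this snippet.
import numpy as np

rng = np.random.default_rng(3)
baseline_mml_db = rng.normal(loc=55, scale=12, size=120)  # hypothetical minimum masking levels (dB)
mcid_half_sd = 0.5 * baseline_mml_db.std(ddof=1)
print(f"0.5-SD MCID estimate: {mcid_half_sd:.1f} dB")
```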


Speech comprehension involves the dynamic interplay of multiple cognitive processes, from basic sound perception, to linguistic encoding, and finally to complex semantic-conceptual interpretation. How the brain handles these diverse streams of information processing remains poorly understood. Applying hidden Markov modeling to fMRI data obtained during spoken narrative comprehension, we reveal that whole-brain networks predominantly oscillate within a tripartite latent state space.
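A minimal sketch of the hidden-Markov step, assuming Gaussian emissions over network time series and the hmmlearn library (an illustrative choice; the study's implementation details are not given in this snippet):

```python
# Sketch of hidden Markov modeling of brain-state dynamics; library choice,
# data shapes, and the 3-state setting are assumptions for illustration.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(4)
timeseries = rng.normal(size=(500, 10))   # placeholder: 500 TRs x 10 network signals

hmm = GaussianHMM(n_components=3, covariance_type="full", n_iter=100, random_state=0)
hmm.fit(timeseries)
states = hmm.predict(timeseries)          # latent-state sequence, here 3 states ("tripartite")
print(np.bincount(states) / len(states))  # fractional occupancy per state
```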


Cognitive component of auditory attention to natural speech events.

Front Hum Neurosci

January 2025

Center for Ear-EEG, Department of Electrical and Computer Engineering, Aarhus University, Aarhus, Denmark.

The recent progress in auditory attention decoding (AAD) methods is based on algorithms that find a relation between the audio envelope and the neurophysiological response. The most popular approach is based on the reconstruction of the audio envelope from electroencephalogram (EEG) signals. These methods are primarily based on the exogenous response driven by the physical characteristics of the stimuli.
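Envelope reconstruction of this kind is usually a lagged linear decoder mapping multichannel EEG to the audio envelope. A minimal sketch assuming ridge regression, an arbitrary lag window, and synthetic data:

```python
# Sketch of envelope reconstruction from EEG with a lagged linear (ridge) decoder;
# lag range, regularization, and data shapes are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(5)
n, n_ch, n_lags = 2000, 32, 16
eeg = rng.normal(size=(n, n_ch))                    # placeholder EEG (samples x channels)
envelope = eeg[:, 0] + 0.5 * rng.normal(size=n)     # placeholder envelope tied to channel 0

# Lagged design matrix: row t stacks eeg[t], eeg[t+1], ..., eeg[t+n_lags-1].
X = np.hstack([np.roll(eeg, -lag, axis=0) for lag in range(n_lags)])[: n - n_lags]
y = envelope[: n - n_lags]

decoder = Ridge(alpha=1.0).fit(X[:1500], y[:1500])
r = np.corrcoef(decoder.predict(X[1500:]), y[1500:])[0, 1]
print(f"reconstruction correlation: {r:.2f}")
```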


Tibetan-Chinese speech-to-speech translation based on discrete units.

Sci Rep

January 2025

Key Laboratory of Ethnic Language Intelligent Analysis and Security Governance of MOE, Minzu University of China, Beijing, 100081, China.

Speech-to-speech translation (S2ST) has evolved from cascade systems, which integrate Automatic Speech Recognition (ASR), Machine Translation (MT), and Text-to-Speech (TTS), to end-to-end models. This evolution has been driven by advancements in model performance and the expansion of cross-lingual speech datasets. Despite the paucity of research on Tibetan speech translation, this paper tackles the challenge of direct Tibetan-to-Chinese speech-to-speech translation within a multi-task learning framework, employing self-supervised learning (SSL) and sequence-to-sequence model training.
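"Discrete units" in this line of work are typically obtained by quantizing self-supervised speech representations so each frame maps to a cluster ID. A minimal sketch of that idea, assuming k-means over placeholder HuBERT-style features (dimensions and cluster count are illustrative):

```python
# Sketch of discrete-unit extraction: quantize (assumed) self-supervised speech
# features with k-means so each frame maps to a unit ID; all names are illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
ssl_features = rng.normal(size=(1000, 768))   # placeholder for HuBERT-style frame features

kmeans = KMeans(n_clusters=100, n_init=10, random_state=0).fit(ssl_features)
units = kmeans.predict(ssl_features)          # discrete unit sequence, one ID per frame
print(units[:20])
```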


Objective: The aim of this study was to assess the hearing level of preschoolers with delayed speech in order to detect any underlying hearing loss. Methods: In this study, we targeted preschool children with speech delay who had not been previously diagnosed with any medical or psychological illness. A total of 54 preschool speech-delayed children were audiologically assessed in our clinic in the past year. The age at time of referral ranged from 2 to 7.

