Objective: This study aims to document the nature and progression of spontaneous speech impairment in patients with Alzheimer's disease (AD) over a 12-month period, using both cross-sectional and prospective longitudinal designs.
Methods: Thirty-one mild-to-moderate AD patients and 30 controls matched for age and socio-cultural background completed simple and complex oral description tasks at baseline. The AD patients then underwent follow-up assessments at 6 and 12 months.
Results: Cross-sectional comparisons indicated that mild-to-moderate AD patients produced more word-finding delays (WFDs) and more empty and indefinite phrases than controls, while producing fewer pictorial themes, repairing fewer errors, responding to fewer WFDs, producing shorter and less complex phrases, and producing speech with less intonational contour. The two groups could not, however, be distinguished on the basis of phonological paraphasias. Longitudinal follow-up nonetheless suggested that phonological processing deteriorates over time, with the prevalence of phonological errors increasing over the 12 months.
Discussion: Consistent with findings from neuropsychological, neuropathological and neuroimaging studies, the language deterioration shown by the AD patients follows a pattern of impairment dominated by semantic errors, which is later joined by a disruption of the phonological aspects of speech.
DOI: http://dx.doi.org/10.1017/neu.2013.16
Proc Conf Assoc Comput Linguist Meet
May 2022
Pharmaceutical Care and Health Systems, University of Minnesota.
Deep learning (DL) techniques involving fine-tuning large numbers of model parameters have delivered impressive performance on the task of discriminating between language produced by cognitively healthy individuals and those with Alzheimer's disease (AD). However, questions remain about their ability to generalize beyond the small reference sets that are publicly available for research. As an alternative to fitting model parameters directly, we propose a novel method by which a Transformer DL model (GPT-2) pre-trained on general English text is paired with an artificially degraded version of itself (GPT-D) to compute the ratio between these two models' perplexities on language from cognitively healthy and impaired individuals.
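The perplexity-ratio idea described above can be illustrated with a minimal toy sketch. This is not the authors' GPT-2/GPT-D implementation: the two unigram "models" below, their probabilities, and the sample utterances are all invented for illustration, with the degraded model simply leaning harder on a vague filler word ("thing"), a crude stand-in for the artificial degradation.

```python
import math

def perplexity(model, tokens):
    # Perplexity of a token sequence under a unigram model (dict of probabilities);
    # unknown tokens get a small floor probability.
    log_prob = sum(math.log(model.get(tok, 1e-6)) for tok in tokens)
    return math.exp(-log_prob / len(tokens))

# Hypothetical intact model vs. an artificially degraded copy that
# overweights a vague filler word (a toy stand-in for GPT-2 vs. GPT-D).
intact = {"the": 0.3, "cat": 0.2, "sat": 0.2, "mat": 0.2, "thing": 0.1}
degraded = {"the": 0.3, "cat": 0.1, "sat": 0.1, "mat": 0.1, "thing": 0.4}

def ratio(tokens):
    # Degraded-to-intact perplexity ratio: specific, well-formed speech
    # surprises the degraded model more, yielding a larger ratio.
    return perplexity(degraded, tokens) / perplexity(intact, tokens)

healthy = ["the", "cat", "sat", "the", "mat"]
impaired = ["the", "thing", "the", "thing", "thing"]
print(ratio(healthy) > ratio(impaired))  # True: healthy speech yields the larger ratio
```

The ratio thus serves as a single score separating the two groups without fitting any parameters on the clinical data, which is the appeal of the approach over fine-tuning.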
Front Hum Neurosci
January 2025
Department of Psychology, Renmin University of China, Beijing, China.
Introduction: While considerable research in language production has focused on incremental processing during conceptual and grammatical encoding, prosodic encoding remains less investigated. This study examines whether focus and accentuation processing in speech production follows linear or hierarchical incrementality.
Methods: We employed visual world eye-tracking to investigate how focus and accentuation are processed during sentence production.
Biol Psychiatry
January 2025
Psychiatry and Neuroscience Departments, Icahn School of Medicine at Mount Sinai, 1 Gustave L. Levy Place, New York City, NY, 10029.
Background: Valid scalable biomarkers for predicting longitudinal clinical outcomes in psychiatric research are crucial for optimizing intervention and prevention efforts. Here we recorded spontaneous speech from initially abstinent individuals with cocaine use disorder (iCUD) for use in predicting drug use outcomes.
Methods: At baseline, 88 iCUD provided 5-minute speech samples describing the positive consequences of quitting drug use and negative consequences of using drugs.
Lang Learn Dev
April 2024
Department of Literatures, Cultures and Languages, University of Connecticut, Storrs, Connecticut, USA.
Joint Attention (JA) and Supported Joint Engagement (Supported JE) have each been reported to predict later language development in typically developing (TD) children and children with Autism Spectrum Disorder (ASD). In this longitudinal study including 33 TD children (20 months at V1) and 30 children with ASD (33 months at V1), the contributions of JA and Supported JE to later language, assessed via standardized tests and spontaneous speech, were directly compared. Frequency and durations of JA and Supported JE episodes were coded from 30-minute interactions with caregivers; subsequent language skills were assessed two years later.
Exposure to loud and/or prolonged noise damages cochlear hair cells and triggers downstream changes in synaptic and electrical activity in multiple brain regions, resulting in hearing loss and altered speech comprehension. It remains unclear, however, whether noise exposure also compromises the cochlear efferent system, a feedback pathway in the brain that fine-tunes hearing sensitivity in the cochlea. We examined the effects of noise-induced hearing loss on the spontaneous action potential (AP) firing pattern in mouse lateral olivocochlear (LOC) neurons.