Neural Speech Encoding in Infancy Predicts Future Language and Communication Difficulties.

Am J Speech Lang Pathol

Brain and Mind Institute and Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Hong Kong SAR, China.

Published: September 2021

AI Article Synopsis

  • The study created a new, cost-effective tool using EEG data to predict future language and communication skills in infants.
  • Researchers analyzed EEG responses from 118 infants to speech stimuli, and assessed their language development using standardized tests months later.
  • The findings showed high accuracy in predicting language outcomes, indicating that EEG can reliably forecast individual language development based on auditory neural function.

Article Abstract

Purpose: This study aimed to construct an objective and cost-effective prognostic tool to forecast the future language and communication abilities of individual infants.

Method: Speech-evoked electroencephalography (EEG) data were collected from 118 infants during the first year of life while they listened to speech stimuli that differed principally in fundamental frequency. Language and communication outcomes, namely four subtests of the MacArthur-Bates Communicative Development Inventories (MCDI)-Chinese version, were collected between 3 and 16 months after the initial EEG testing. In the two-way classification, children were classified into those with future MCDI scores below the 25th percentile for their age group and those above it; the three-way classification grouped them into < 25th, 25th-75th, and > 75th percentile groups. Machine learning (support vector machine classification) with cross-validation was used for model construction, and statistical significance was assessed.

Results: Across the four MCDI measures of early gestures, later gestures, vocabulary comprehension, and vocabulary production, the areas under the receiver-operating characteristic curve of the predictive models were respectively .92 ± .031, .91 ± .028, .90 ± .035, and .89 ± .039 for the two-way classification, and .88 ± .041, .89 ± .033, .85 ± .047, and .85 ± .050 for the three-way classification (p < .01 for all models).

Conclusions: Future language and communication variability can be predicted by an objective EEG method that indicates the function of the auditory neural pathway foundational to spoken language development, with precision sufficient for individual predictions. Longer-term research is needed to assess the predictability of categorical diagnostic status.

Supplemental Material: https://doi.org/10.23641/asha.15138546
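The method combines percentile-based outcome labels with a cross-validated support vector machine evaluated by ROC AUC. The sketch below illustrates that general pipeline in Python with scikit-learn; it is not the authors' implementation. The feature set, kernel, fold count, and all variable names are assumptions for illustration, and the data are random placeholders standing in for the EEG-derived features and MCDI percentile scores.

```python
# Hypothetical sketch of the pipeline described in the abstract:
# speech-evoked EEG features -> SVM classifier -> cross-validated ROC AUC.
# All specifics (features, kernel, folds) are illustrative assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

# Placeholder data: 118 infants x N EEG-derived features (real features would
# come from the speech-evoked EEG recordings).
n_infants, n_features = 118, 20
X = rng.normal(size=(n_infants, n_features))

# Placeholder MCDI percentile scores for one subtest (e.g., vocabulary comprehension).
mcdi_percentile = rng.uniform(0, 100, size=n_infants)

# Two-way labels: below vs. above the 25th percentile for age.
y_two_way = (mcdi_percentile < 25).astype(int)

# Three-way labels: <25th, 25th-75th, >75th percentile groups.
y_three_way = np.digitize(mcdi_percentile, bins=[25, 75])

# SVM with probability estimates enabled so ROC AUC can be computed.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Binary case: standard ROC AUC across folds.
auc_two_way = cross_val_score(model, X, y_two_way, cv=cv, scoring="roc_auc")
print(f"Two-way AUC: {auc_two_way.mean():.2f} +/- {auc_two_way.std():.3f}")

# Three-class case: one-vs-rest ROC AUC across folds.
auc_three_way = cross_val_score(model, X, y_three_way, cv=cv, scoring="roc_auc_ovr")
print(f"Three-way AUC (OvR): {auc_three_way.mean():.2f} +/- {auc_three_way.std():.3f}")
```

One design note: the three-class AUC here uses a one-vs-rest generalization, which is one common way to extend ROC AUC beyond two groups; the abstract does not specify which multiclass AUC definition the authors used.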


Source
http://dx.doi.org/10.1044/2021_AJSLP-21-00077

Publication Analysis

Top Keywords

language communication (16)
future language (12)
two-way classification (8)
three-way classification (8)
language (5)
classification (5)
neural speech (4)
speech encoding (4)
encoding infancy (4)
infancy predicts (4)

Similar Publications

Employing foreign caregivers: A qualitative study of the perspectives of older stroke survivors.

PLoS One

January 2025

Graduate Institute of Injury Prevention and Control, College of Public Health, Taipei Medical University, Taipei, Taiwan.

Background: Global populations are aging, and the number of stroke survivors is increasing. Consequently, the need for caregiver support has increased. Because of this and demographic and socioeconomic changes, foreign caregivers are increasingly in demand in many developed countries.


Clinical Manifestations.

Alzheimers Dement

December 2024

Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, WI, USA.

Background: Sensitive screening for early Alzheimer's disease (AD)-related cognitive decline is needed. Prior research links high beta-amyloid (Aβ) levels to reduced proper name (PN) retrieval in individuals without cognitive impairment. We examined whether language-related regional tau from PET was associated with Logical Memory (LM) proper name recall, accounting for LM covariates.


Against the Phrase "Aggressive Care".

Camb Q Healthc Ethics

January 2025

Baylor College of Medicine, Center for Medical Ethics and Health Policy, Houston, TX, United States.

Language is the primary technology clinical ethicists use as they offer guidance about norms. Like any other piece of technology, to use the technology well requires attention, intention, skill, and knowledge. Word choice becomes a matter of professional practice.


Background: Newborn hearing screening is a physiologic screen to identify infants who may be deaf or hard of hearing (DHH) and would benefit from early intervention. Typically, an infant who does not pass the newborn hearing screen is referred for clinical audiology testing, which may be followed by genetic testing to identify the etiology of the infant's DHH.

Content: The current newborn hearing screening paradigm can miss mild cases of DHH or later-onset DHH, leaving a child at risk for unrecognized DHH, which could impact long-term language, communication, and social development.


Dense Paraphrasing for multimodal dialogue interpretation.

Front Artif Intell

December 2024

Computer Science Department, Brandeis University, Waltham, MA, United States.

Multimodal dialogue involving multiple participants presents complex computational challenges, primarily due to the rich interplay of diverse communicative modalities, including speech, gesture, action, and gaze. These modalities interact in complex ways that traditional dialogue systems often struggle to accurately track and interpret. To address these challenges, we extend the textual enrichment strategy of Dense Paraphrasing (DP) by translating each nonverbal modality into linguistic expressions.

