Impaired Audiovisual Representation of Phonemes in Children with Developmental Language Disorder.

Brain Sci

Department of Statistics, Purdue University, 250 N. University Street, West Lafayette, IN 47907-2066, USA.

Published: April 2021

We examined whether children with developmental language disorder (DLD) differed from their peers with typical development (TD) in the degree to which they encode information about a talker's mouth shape into long-term phonemic representations. Children watched a talker's face and listened to rare changes from [i] to [u] or the reverse. In the neutral condition, the talker's face had a closed mouth throughout. In the audiovisual violation condition, the mouth shape always matched the frequent vowel, even when the rare vowel was played. We hypothesized that in the neutral condition no long-term audiovisual memory traces for speech sounds would be activated. Therefore, the neural response elicited by deviants would reflect only a violation of the observed audiovisual sequence. In contrast, we expected that in the audiovisual violation condition, a long-term memory trace for the speech sound/lip configuration typical of the frequent vowel would be activated. In this condition, then, the neural response elicited by rare sound changes would reflect a violation not only of the observed audiovisual patterns but also of a long-term memory representation for how a given vowel looks when articulated. Children pressed a response button whenever they saw the talker's face assume a silly expression. We found that in children with TD, rare auditory changes produced a significant mismatch negativity (MMN) event-related potential (ERP) component over the posterior scalp in the audiovisual violation condition but not in the neutral condition. In children with DLD, no MMN was present in either condition. Rare vowel changes elicited a significant P3 in both groups and conditions, indicating that all children noticed the auditory changes.
Our results suggest that children with TD, but not children with DLD, incorporate visual information into long-term phonemic representations and detect violations in audiovisual phonemic congruency even when they perform a task that is unrelated to phonemic processing.

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8073635
DOI: http://dx.doi.org/10.3390/brainsci11040507

Publication Analysis

Top Keywords

talker's face (12)
neutral condition (12)
audiovisual violation (12)
violation condition (12)
children (9)
children developmental (8)
developmental language (8)
language disorder (8)
mouth shape (8)
long-term phonemic (8)

Similar Publications

Objectives: Musicians face an increased risk of hearing loss due to prolonged and repetitive exposure to high noise levels. Detecting early signs of hearing loss, which are subtle and often elusive to traditional clinical tests such as pure-tone audiometry, is essential. The objective of this study was to investigate the impact of noise exposure on the electrophysiological and perceptual aspects of subclinical hearing damage in young musicians with normal audiometric thresholds.


Background: In skilled speech production, the motor system coordinates the movements of distinct sets of articulators to form precise and consistent constrictions in the vocal tract at distinct locations, across contextual variations in movement rate and amplitude. Research efforts have sought to uncover the critical control parameters governing interarticulator coordination during constriction formation, with a focus on two parameters: (a) the latency of movement onset of one articulator relative to another (a temporal parameter) and (b) the phase angle of movement onset of one articulator relative to another (a spatiotemporal parameter). Consistent interarticulator timing between jaw and tongue tip movements during the formation of constrictions at the alveolar ridge was previously found to scale more reliably than phase angles across variation in production rate and syllable stress.

Article Synopsis
  • Word identification accuracy is influenced by factors like word frequency, listening environments, and listener age, with younger and older adults showing different levels of performance, particularly in noisy settings.
  • This study investigates how both age groups perceive speech-in-noise, specifically focusing on medically related terms that vary in familiarity and frequency within simulated hospital noise, highlighting the challenges older adults face.
  • Findings revealed that older adults struggle more with low-familiarity medical words in hospital noise compared to younger adults, emphasizing the need for better communication strategies in healthcare settings.

Face masks provide fundamental protection against the transmission of respiratory viruses but hamper communication. We estimated auditory and visual obstacles generated by face masks on communication by measuring the neural tracking of speech. To this end, we recorded the EEG while participants were exposed to naturalistic audio-visual speech, embedded in 5-talker noise, in three contexts: (i) no-mask (audio-visual information was fully available), (ii) virtual mask (occluded lips, but intact audio), and (iii) real mask (occluded lips and degraded audio).


Background: The consensus in scientific literature is that each child undergoes a unique linguistic development path, albeit with shared developmental stages. Some children excel or lag behind their peers in language skills. Consequently, a key challenge in language acquisition research is pinpointing factors influencing individual differences in language development.

