We examined whether children with developmental language disorder (DLD) differed from their peers with typical development (TD) in the degree to which they encode information about a talker's mouth shape into long-term phonemic representations. Children watched a talker's face and listened to rare changes from [i] to [u] or the reverse. In the neutral condition, the talker's face had a closed mouth throughout. In the audiovisual violation condition, the mouth shape always matched the frequent vowel, even when the rare vowel was played. We hypothesized that in the neutral condition no long-term audiovisual memory traces for speech sounds would be activated. Therefore, the neural response elicited by deviants would reflect only a violation of the observed audiovisual sequence. In contrast, we expected that in the audiovisual violation condition, a long-term memory trace for the speech sound/lip configuration typical of the frequent vowel would be activated. In this condition, then, the neural response elicited by rare sound changes would reflect a violation not only of the observed audiovisual patterns but also of a long-term memory representation of how a given vowel looks when articulated. Children pressed a response button whenever they saw the talker's face assume a silly expression. We found that in children with TD, rare auditory changes produced a significant mismatch negativity (MMN) event-related potential (ERP) component over the posterior scalp in the audiovisual violation condition but not in the neutral condition. In children with DLD, no MMN was present in either condition. Rare vowel changes elicited a significant P3 in both groups and conditions, indicating that all children noticed the auditory changes. Our results suggest that children with TD, but not children with DLD, incorporate visual information into long-term phonemic representations and detect violations of audiovisual phonemic congruency even when they perform a task that is unrelated to phonemic processing.
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8073635 | PMC |
| http://dx.doi.org/10.3390/brainsci11040507 | DOI Listing |
Ear Hear
November 2024
Department of Speech Language Pathology & Audiology, Towson University, Towson, Maryland, USA.
Objectives: Musicians face an increased risk of hearing loss due to prolonged and repetitive exposure to high noise levels. Detecting early signs of hearing loss, which are subtle and often elusive to traditional clinical tests like pure-tone audiometry, is essential. The objective of this study was to investigate the impact of noise exposure on the electrophysiological and perceptual aspects of subclinical hearing damage in young musicians with normal audiometric thresholds.
J Speech Lang Hear Res
January 2025
Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville.
Background: In skilled speech production, the motor system coordinates the movements of distinct sets of articulators to form precise and consistent constrictions in the vocal tract at distinct locations, across contextual variations in movement rate and amplitude. Research efforts have sought to uncover the critical control parameters governing interarticulator coordination during constriction formation, with a focus on two parameters: (a) latency of movement onset of one articulator relative to another (temporal parameters) and (b) phase angle of movement onset for one articulator relative to another (spatiotemporal parameters). Consistent interarticulator timing between jaw and tongue tip movements, during the formation of constrictions at the alveolar ridge, was previously found to scale more reliably than phase angles across variation in production rate and syllable stress.
Cogn Res Princ Implic
December 2024
Division of Geriatrics, Gerontology and Palliative Medicine, Department of Internal Medicine, University of Nebraska Medical Center, Omaha, USA.
Heliyon
August 2024
MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy.
Face masks provide fundamental protection against the transmission of respiratory viruses but hamper communication. We estimated the auditory and visual obstacles that face masks impose on communication by measuring the neural tracking of speech. To this end, we recorded EEG while participants were exposed to naturalistic audio-visual speech, embedded in 5-talker noise, in three contexts: (i) no mask (audio-visual information fully available), (ii) virtual mask (occluded lips but intact audio), and (iii) real mask (occluded lips and degraded audio).
F1000Res
August 2024
Department of Neuroscience, Imaging and Clinical Sciences, Gabriele d'Annunzio University of Chieti and Pescara, Chieti, Abruzzo, Italy.
Background: The consensus in the scientific literature is that each child follows a unique path of linguistic development, albeit with shared developmental stages. Some children excel in language skills, whereas others lag behind their peers. Consequently, a key challenge in language acquisition research is pinpointing the factors that influence individual differences in language development.