Long-term phonemic representations become audiovisual by mid-childhood.

Neuropsychologia

Department of Statistics, 250 N. University Street, West Lafayette, IN, 47907-2066, USA; Department of Human Development and Family Studies, 1202 West State St, West Lafayette, IN, 47907-2055, USA.

Published: September 2023

In earlier work with adults, we showed that long-term phonemic representations are audiovisual, meaning that they contain information on the typical mouth shape during articulation. Many aspects of audiovisual processing have a prolonged developmental course, often not reaching maturity until late adolescence. In this study, we examined the status of phonemic representations in two groups of children: 8-9-year-olds and 11-12-year-olds. We used the same audiovisual oddball paradigm as in the earlier study with adults (Kaganovich and Christ, 2021). On each trial, participants saw a face and heard one of two vowels. One vowel occurred frequently (standard), while the other occurred rarely (deviant). In one condition (neutral), the face had a closed, non-articulating mouth. In the other condition (audiovisual violation), the mouth shape matched the frequent vowel. Although stimuli were audiovisual in both conditions, we hypothesized that identical auditory changes would be perceived differently by participants. Namely, in the neutral condition, deviants violated only the audiovisual pattern specific to each experimental block. By contrast, in the audiovisual violation condition, deviants additionally violated long-term representations of how a speaker's mouth looks during articulation. We compared the amplitudes of the MMN and P3 components elicited by deviants in the two conditions. In the 11-12-year-old group, the pattern of neural responses was similar to that in adults: a larger MMN component in the audiovisual violation condition than in the neutral condition, with no major difference in P3 amplitude. In contrast, in the 8-9-year-old group, we saw a posterior MMN in the neutral condition only and a larger P3 in the audiovisual violation condition than in the neutral condition. The larger P3 in the audiovisual violation condition suggests that younger children did perceive deviants as more attention-grabbing when they violated the typical combination of sound and mouth shape. Yet, at this age, the earlier, more automatic stages of phonemic processing indexed by the MMN component may not yet encode visual speech elements the way they do in older children and adults. We conclude that phonemic representations do not become audiovisual until 11-12 years of age.
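For readers less familiar with the oddball paradigm, the sketch below (not from the paper, and not the authors' analysis pipeline) illustrates how an MMN difference wave is conventionally quantified: average the EEG epochs for standard and deviant trials separately, subtract the standard ERP from the deviant ERP, and measure the mean amplitude of the difference in an MMN time window. It runs on synthetic NumPy arrays; the array shapes, sampling rate, electrode index, and 150-250 ms window are all illustrative assumptions.

    # Minimal MMN difference-wave sketch on synthetic data.
    # All parameters below are illustrative assumptions, not values from the paper.
    import numpy as np

    rng = np.random.default_rng(0)
    sfreq = 500                                   # sampling rate in Hz (assumed)
    times = np.arange(-0.1, 0.5, 1 / sfreq)       # epoch from -100 to 500 ms

    # Fake single-trial epochs: (n_trials, n_channels, n_samples);
    # deviants are rare relative to standards, as in an oddball block.
    standard = rng.normal(0.0, 1.0, (400, 32, times.size))
    deviant = rng.normal(0.0, 1.0, (100, 32, times.size))

    # ERPs: average across trials within each condition
    erp_std = standard.mean(axis=0)
    erp_dev = deviant.mean(axis=0)

    # MMN difference wave: deviant minus standard
    mmn = erp_dev - erp_std

    # Mean amplitude in an assumed 150-250 ms window at a
    # hypothetical fronto-central channel index (e.g., Fz)
    fz = 5
    window = (times >= 0.150) & (times <= 0.250)
    mmn_amplitude = mmn[fz, window].mean()
    print(f"MMN mean amplitude (arbitrary units): {mmn_amplitude:.3f}")

The same subtraction logic, applied in a later time window at posterior sites, would yield the P3 measure the study compares across conditions; with real EEG data one would add filtering, artifact rejection, and baseline correction before averaging.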

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10530328
DOI: http://dx.doi.org/10.1016/j.neuropsychologia.2023.108633

Publication Analysis

Top Keywords

phonemic representations: 16
audiovisual violation: 16
neutral condition: 16
audiovisual: 12
representations audiovisual: 12
mouth shape: 12
long-term phonemic: 8
condition: 8
condition deviants: 8
violation condition: 8

Similar Publications

Simulating Early Phonetic and Word Learning Without Linguistic Categories.

Dev Sci

March 2025

Laboratoire de Sciences Cognitives et de Psycholinguistique, Département d'Études Cognitives, ENS, EHESS, CNRS, PSL University, Paris, France.

Before they even talk, infants become sensitive to the speech sounds of their native language and recognize the auditory form of an increasing number of words. Traditionally, these early perceptual changes are attributed to an emerging knowledge of linguistic categories such as phonemes or words. However, there is growing skepticism surrounding this interpretation due to limited evidence of category knowledge in infants.


Depression detection from speech is widely applied because speech is easy to acquire and rich in emotional cues. However, effectively segmenting and integrating depressed speech segments remains challenging, and repeated merging can blur the original information.

Article Synopsis
  • Cochlear implants help restore speech understanding in people with severe hearing loss, but how users perceive sounds compared to normal hearing is still unclear.
  • A study examined the brain's response to speech sounds (phoneme-related potentials) in both cochlear implant users and normal hearing individuals, focusing on attention effects.
  • Results showed similar early responses in both groups, but cochlear implant users had reduced activity for later responses, suggesting potential areas for improving speech assessment and tailored rehabilitation strategies.

Deep-learning models reveal how context and listener attention shape electrophysiological correlates of speech-to-language transformation.

PLoS Comput Biol

November 2024

Department of Neuroscience and Del Monte Institute for Neuroscience, University of Rochester, Rochester, New York, United States of America.

Article Synopsis
  • The human brain transforms continuous speech into words by interpreting various factors like intonation and accents, and this process can be modeled using EEG recordings.
  • Contemporary models tend to overlook how sounds are categorized in the brain, limiting our understanding of speech processing.
  • The study finds that deep-learning speech recognition systems such as Whisper improve EEG models of speech comprehension by incorporating context, and it demonstrates that linguistic structure is crucial for accurately modeling brain responses, especially in complex listening environments.

Recent research has shown that children as young as 19 months demonstrate graded sensitivity to mispronunciations in consonant onsets and vowels in word recognition tasks. This is evident in their progressively diminishing attention to the relevant objects.
