Newborns' neural processing of native vowels reveals directional asymmetries.

Dev Cogn Neurosci

Department of Pathological Physiology, Faculty of Medicine in Hradec Králové, Charles University, Šimkova 870, 500 03 Hradec Králové, Czechia; Department of Medical Biophysics, Faculty of Medicine in Hradec Králové, Charles University, Šimkova 870, 500 03 Hradec Králové, Czechia.

Published: December 2021

Prenatal learning of speech rhythm and melody is well documented. Much less is known about the earliest acquisition of segmental speech categories. We tested whether newborn infants perceive native vowels, but not nonspeech sounds, through some existing (proto-)categories, and whether they do so more robustly for some vowels than for others. Sensory event-related potentials (ERPs) and mismatch responses (MMRs) were obtained from 104 neonates acquiring Czech. The ERPs elicited by vowels were larger than the ERPs to nonspeech sounds and reflected the differences between the individual vowel categories. The MMRs to changes in vowels, but not in nonspeech sounds, revealed left-lateralized asymmetrical processing patterns: a change from a focal [a] to a nonfocal [ɛ], and a change from short [ɛ] to long [ɛ:], elicited more negative MMRs than the reverse changes. Contrary to predictions, we did not find evidence of a developmental advantage for vowel length contrasts (supposedly most readily available in utero) over vowel quality contrasts (supposedly less salient in utero). An explanation for these asymmetries in terms of a differential degree of prior phonetic warping of speech sounds is proposed. Future studies with newborns from different language backgrounds should test whether the prenatal learning scenario proposed here is plausible.
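As a rough illustration of how a mismatch response is conventionally quantified (a deviant-minus-standard ERP difference wave), the Python sketch below computes an MMR from trial-averaged ERPs. The channel count, sampling rate, epoch length, analysis window, and placeholder data are all assumptions made for the example; this is not the authors' analysis pipeline.

```python
# Minimal sketch: deriving a mismatch response (MMR) as a deviant-minus-standard
# difference wave from trial-averaged ERPs. All shapes, the sampling rate, and
# the analysis window are hypothetical; this is NOT the authors' pipeline.
import numpy as np

fs = 250                            # sampling rate in Hz (assumed)
n_channels, n_samples = 32, 200     # e.g. 32 EEG channels, 800 ms epochs (assumed)

# Trial-averaged ERPs: standard = frequent stimulus (e.g. repeated [ɛ]),
# deviant = rare change (e.g. [a] presented among [ɛ]). Placeholder data here.
erp_standard = np.random.randn(n_channels, n_samples)
erp_deviant = np.random.randn(n_channels, n_samples)

# The MMR is conventionally the difference wave: deviant minus standard.
mmr = erp_deviant - erp_standard

# Mean MMR amplitude in an illustrative 200-400 ms post-stimulus window,
# where infant mismatch responses are often quantified.
win = slice(int(0.2 * fs), int(0.4 * fs))
mean_amplitude = mmr[:, win].mean(axis=1)   # one value per channel
print(mean_amplitude.shape)                 # -> (32,)
```

A more negative mean amplitude for one direction of change than for the reverse change is the kind of asymmetry the abstract reports.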

PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8577326
DOI: http://dx.doi.org/10.1016/j.dcn.2021.101023


Similar Publications

Auditory sequence learning with degraded input: children with cochlear implants ('nature effect') compared to children from low and high socio-economic backgrounds ('nurture effect').

Sci Rep

March 2025

The Department of Communication Disorders, Steyer School of Health Professions, Faculty of Medicine and Health Sciences, Tel Aviv University, Tel Aviv, Israel.

Implicit sequence learning (SL) is crucial for language acquisition and has been studied in children with organic language deficits (e.g., specific language impairment).


Natural language sampling (NLS) offers rich insights into real-world speech and language usage across diverse groups; yet, human transcription is time-consuming and costly. Automatic speech recognition (ASR) technology has the potential to revolutionize NLS research. However, its performance in clinical-research settings with young children and those with developmental delays remains unknown.
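One way ASR transcripts are commonly scored against human transcription is word error rate (WER), computed with a word-level edit distance. The Python sketch below is a minimal illustration of that metric; the metric choice and the example sentences are assumptions for illustration, not details reported in this abstract.

```python
# Minimal sketch: scoring ASR output against a human reference transcript with
# word error rate (WER). The example sentences are invented for illustration.
def wer(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the doggy wants a cookie", "the dogy want a cookie"))  # -> 0.4
```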


Neural tracking of auditory statistical regularities in adults with and without dyslexia.

Cereb Cortex

February 2025

Next Generation Artificial Intelligence Research Center, Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan.

Listeners implicitly use statistical regularities to segment continuous sound input into meaningful units, e.g., transitional probabilities between syllables to segment a speech stream into separate words. Implicit learning of such statistical regularities in a novel stimulus stream is reflected in a synchronization of neural responses to the sequential stimulus structure. The present study aimed to test the hypothesis that neural tracking of the statistical stimulus structure is reduced in individuals with dyslexia, who have weaker reading and spelling skills, and possibly also weaker statistical learning abilities in general, compared to healthy controls.
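To make the transitional-probability idea concrete, the Python sketch below estimates syllable-to-syllable transitional probabilities from a toy stream and places word boundaries where the probability dips. The syllable inventory, the stream, and the boundary threshold are invented for illustration and are not the stimuli or analysis used in the study.

```python
# Minimal sketch of statistical segmentation via transitional probabilities:
# P(next | current) = count(current, next) / count(current).
# Word boundaries tend to fall where this probability dips.
from collections import Counter

# Toy stream built from three "words": tu-pi-ro, go-la-bu, pa-do-ti
stream = ("tu pi ro go la bu pa do ti tu pi ro pa do ti go la bu "
          "tu pi ro go la bu").split()

pair_counts = Counter(zip(stream, stream[1:]))
syll_counts = Counter(stream[:-1])

def transitional_probability(a, b):
    """P(b follows a), estimated from the stream."""
    return pair_counts[(a, b)] / syll_counts[a]

# Segment: insert a boundary wherever the forward TP drops below a threshold.
# Within-word TPs are 1.0 in this toy stream; cross-word TPs are lower.
threshold = 0.75   # arbitrary for this toy example
words, current = [], [stream[0]]
for a, b in zip(stream, stream[1:]):
    if transitional_probability(a, b) < threshold:
        words.append("-".join(current))
        current = []
    current.append(b)
words.append("-".join(current))
print(words)   # high within-word TPs keep tu-pi-ro, go-la-bu, pa-do-ti together
```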


Objectives: If task-irrelevant sounds are present when someone is actively listening to speech, the irrelevant sounds can cause distraction, reducing word recognition performance and increasing listening effort. In some previous investigations into auditory distraction, the task-irrelevant stimuli were non-speech sounds (e.g.


Adaptation to sentences and melodies when making judgments along a voice-nonvoice continuum.

Atten Percept Psychophys

February 2025

Department of Psychology, University of Minnesota-Twin Cities, 75 E River Rd, Minneapolis, MN, 55455, USA.

Adaptation to constant or repetitive sensory signals serves to improve detection of novel events in the environment and to encode incoming information more efficiently. Within the auditory modality, contrastive adaptation effects have been observed within a number of categories, including voice and musical instrument type. A recent study found contrastive perceptual shifts between voice and instrument categories following repetitive presentation of adaptors consisting of either vowels or instrument tones.

