Auditory event-related potentials index faster processing of natural speech but not synthetic speech over nonspeech analogs in children.

Brain Lang

Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S., Nashville, TN, USA; Department of Psychiatry and Behavioral Sciences, Vanderbilt Psychiatric Hospital, 1601 23rd Ave. S, Nashville, TN, USA; Vanderbilt Kennedy Center, 110 Magnolia Cir, Nashville, TN, USA; Vanderbilt Brain Institute, 6133 Medical Research Building III, 465 21st Avenue S., Nashville, TN, USA.

Published: August 2020

Given the crucial role of speech sounds in human language, it may be beneficial for speech to be supported by more efficient auditory and attentional neural processing mechanisms than nonspeech sounds. However, previous event-related potential (ERP) studies have found either no differences or slower auditory processing of speech relative to nonspeech, as well as inconsistent attentional processing. We hypothesized that this may be due to the use of synthetic stimuli in past experiments. The present study measured ERP responses during passive listening to both synthetic and natural speech and to complexity-matched nonspeech analog sounds in 22 children aged 8-11 years. Although children were more likely to show immature auditory ERP responses to the more complex natural stimuli, ERP latencies were significantly faster to natural speech than to cow vocalizations, but significantly slower to synthetic speech than to tones. The attentional results indicated a P3a orienting response only to the cow sound, and we discuss potential methodological reasons for this. We conclude that our results support more efficient auditory processing of natural speech sounds in children, though more research with a wider array of stimuli will be necessary to confirm these results. Our results also highlight the importance of using natural stimuli in research investigating the neurobiology of language.
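The latency comparisons described above depend on extracting the peak latency of an ERP component from each condition's averaged waveform within a predefined search window. The sketch below is a rough illustration only, not the authors' analysis pipeline; the sampling rate, window bounds, component choice, and random placeholder waveforms are all assumptions.

```python
import numpy as np

def peak_latency_ms(erp, sfreq, tmin_ms, tmax_ms, polarity="positive"):
    """Return the latency (ms) of the peak within [tmin_ms, tmax_ms].

    erp      : 1-D array, condition-averaged ERP at one electrode, with the
               epoch assumed to start at stimulus onset (time zero).
    sfreq    : sampling rate in Hz (illustrative value below).
    polarity : "positive" for components such as P1/P3a, "negative" for N2.
    """
    start = int(round(tmin_ms / 1000.0 * sfreq))
    stop = int(round(tmax_ms / 1000.0 * sfreq))
    window = erp[start:stop]
    idx = np.argmax(window) if polarity == "positive" else np.argmin(window)
    return (start + idx) / sfreq * 1000.0

# Hypothetical usage: compare P1 latency for natural speech vs. the cow analog.
sfreq = 500.0
rng = np.random.default_rng(0)
erp_speech = rng.standard_normal(500)   # placeholder averaged waveform (1 s)
erp_cow = rng.standard_normal(500)      # placeholder averaged waveform (1 s)
lat_speech = peak_latency_ms(erp_speech, sfreq, 50, 150, "positive")
lat_cow = peak_latency_ms(erp_cow, sfreq, 50, 150, "positive")
print(f"P1 latency: speech {lat_speech:.1f} ms vs. cow {lat_cow:.1f} ms")
```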

Source: http://dx.doi.org/10.1016/j.bandl.2020.104825

Publication Analysis

Top Keywords (frequency): natural speech (16); speech (9); processing natural (8); synthetic speech (8); speech nonspeech (8); speech sounds (8); efficient auditory (8); auditory processing (8); erp responses (8); natural stimuli (8)

Similar Publications

A collicular map for touch-guided tongue control.

Nature

January 2025

Department of Neurobiology and Behavior, Cornell University, Ithaca, NY, USA.

Accurate goal-directed behaviour requires the sense of touch to be integrated with information about body position and ongoing motion. Behaviours such as chewing, swallowing and speech critically depend on precise tactile events on a rapidly moving tongue, but neural circuits for dynamic touch-guided tongue control are unknown. Here, using high-speed videography, we examined three-dimensional lingual kinematics as mice drank from a water spout that unexpectedly changed position during licking, requiring re-aiming in response to subtle contact events on the left, centre or right surface of the tongue.

The Words Children Hear and See: Lexical Diversity Across Modalities and Its Impact on Lexical Development.

Dev Sci

March 2025

Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), Affiliated Mental Health Center (ECNU), Institute of Brain and Education Innovation, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China.

Early vocabulary development benefits from diverse lexical exposures within children's language environment. However, the influence of lexical diversity on children as they enter middle childhood and are exposed to multimodal language inputs remains unclear. This study evaluates global and local aspects of lexical diversity in three 1.
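Lexical diversity at the "global" and "local" levels is often operationalized with measures such as the overall type-token ratio versus a moving-window variant. The sketch below is an illustrative stand-in, not the study's actual metric set; the toy token list and window size are assumptions.

```python
def type_token_ratio(tokens):
    """Global lexical diversity: unique word types divided by total tokens."""
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def moving_average_ttr(tokens, window=50):
    """Local lexical diversity (MATTR): mean TTR over sliding windows."""
    if len(tokens) < window:
        return type_token_ratio(tokens)
    ttrs = [type_token_ratio(tokens[i:i + window])
            for i in range(len(tokens) - window + 1)]
    return sum(ttrs) / len(ttrs)

# Hypothetical child-directed utterance, tokenized on whitespace.
tokens = "look at the big dog the dog is running look the dog runs fast".split()
print(round(type_token_ratio(tokens), 3))
print(round(moving_average_ttr(tokens, window=5), 3))
```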

Speech change is a biometric marker for Parkinson's disease (PD). However, evaluating speech variability across diverse languages is challenging. We aimed to develop a cross-language algorithm differentiating between PD patients and healthy controls using a Taiwanese and Korean speech data set.
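A common way to test whether a speech-based classifier generalizes across languages is to hold out one language at a time during cross-validation. The sketch below is an assumed setup using scikit-learn with fabricated placeholder features and labels; it is not the authors' published algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Placeholder acoustic features (e.g., jitter, shimmer, HNR, speech rate)
# for 120 hypothetical speakers; labels: 1 = PD, 0 = healthy control.
X = rng.standard_normal((120, 4))
y = rng.integers(0, 2, size=120)
language = np.array(["taiwanese"] * 60 + ["korean"] * 60)  # group labels

# Standardize features, then fit a simple linear classifier.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Leave-one-language-out: train on one language, test on the other.
scores = cross_val_score(clf, X, y, groups=language,
                         cv=LeaveOneGroupOut(), scoring="roc_auc")
print("Held-out language AUCs:", scores)
```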

Cross-device and test-retest reliability of speech acoustic measurements derived from consumer-grade mobile recording devices.

Behav Res Methods

December 2024

Anhui Province Key Laboratory of Medical Physics and Technology, Institute of Health and Medical Technology, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, China.

In recent years, there has been growing interest in remote speech assessment through automated speech acoustic analysis. While the reliability of widely used features has been validated in professional recording settings, it remains unclear how the heterogeneity of consumer-grade recording devices, commonly used in nonclinical settings, impacts the reliability of these measurements. To address this issue, we systematically investigated the cross-device and test-retest reliability of classical speech acoustic measurements in a sample of healthy Chinese adults using consumer-grade equipment across three popular speech tasks: sustained phonation (SP), diadochokinesis (DDK), and picture description (PicD).
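Cross-device and test-retest reliability of this kind is typically summarized with an intraclass correlation coefficient. Below is a minimal, self-contained sketch of ICC(2,1) computed from a two-way ANOVA decomposition (speakers by devices); the feature values are fabricated placeholders, and this is not the paper's analysis code.

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.

    data : (n_subjects, k_raters) array, e.g., one acoustic feature measured
           for each speaker on each recording device.
    """
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand_mean = data.mean()
    row_means = data.mean(axis=1)   # per-subject means
    col_means = data.mean(axis=0)   # per-device means

    ss_rows = k * ((row_means - grand_mean) ** 2).sum()
    ss_cols = n * ((col_means - grand_mean) ** 2).sum()
    ss_total = ((data - grand_mean) ** 2).sum()
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Hypothetical mean fundamental frequency (Hz) for 5 speakers on 3 devices.
f0 = np.array([[118.2, 119.0, 117.5],
               [205.4, 204.8, 206.1],
               [142.0, 141.2, 143.0],
               [176.5, 177.1, 175.9],
               [131.3, 130.8, 132.0]])
print(f"ICC(2,1) = {icc_2_1(f0):.3f}")
```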

Objective: To improve the performance of medical entity normalization across many languages, especially for languages with fewer resources than English.

Materials And Methods: We propose xMEN, a modular system for cross-lingual (x) medical entity normalization (MEN), accommodating both low- and high-resource scenarios. To account for the scarcity of aliases for many target languages and terminologies, we leverage multilingual aliases via cross-lingual candidate generation.
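Cross-lingual candidate generation of this kind is often approximated with character n-gram similarity between a mention and multilingual terminology aliases. The sketch below is an illustrative stand-in using scikit-learn TF-IDF over character n-grams, not xMEN's actual implementation; the alias table, concept IDs, and query mention are made up for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy multilingual alias table: (concept ID, surface form in some language).
aliases = [
    ("C0020538", "hypertension"),
    ("C0020538", "hipertensión arterial"),
    ("C0020538", "Bluthochdruck"),
    ("C0011849", "diabetes mellitus"),
    ("C0011849", "Zuckerkrankheit"),
]
concept_ids = [cid for cid, _ in aliases]
alias_texts = [text for _, text in aliases]

# Character n-gram TF-IDF index over all aliases.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
alias_matrix = vectorizer.fit_transform(alias_texts)

def generate_candidates(mention, top_k=3):
    """Rank aliases by character n-gram cosine similarity to the mention."""
    sims = cosine_similarity(vectorizer.transform([mention]), alias_matrix)[0]
    ranked = sorted(zip(concept_ids, alias_texts, sims), key=lambda t: -t[2])
    return ranked[:top_k]

# German mention normalized against the multilingual alias index.
for cid, alias, score in generate_candidates("Hypertonie"):
    print(f"{cid}  {alias:<24} {score:.2f}")
```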
