Brain structure predicts the learning of foreign speech sounds.

Cereb Cortex

Unité INSERM 562, Service Hospitalier Frédéric Joliot, CEA/DRM/DSV, 4 Place du Général Leclerc, 91401 Orsay cedex, France.

Published: March 2007

Previous work has shown a relationship between parietal lobe anatomy and nonnative speech sound learning. We scanned a new group of phonetic learners using structural magnetic resonance imaging and diffusion tensor imaging. Voxel-based morphometry indicated higher white matter (WM) density in left Heschl's gyrus (HG) in faster compared with slower learners, and manual segmentation of this structure confirmed that the WM volume of left HG is larger in the former compared with the latter group. This finding was replicated in a reanalysis of the original groups tested in Golestani and others (2002, Anatomical correlates of learning novel speech sounds. Neuron 35:997-1010). We also found that faster learners have a greater asymmetry (left > right) in parietal lobe volumes than slower learners and that the right insula and HG are more superiorly located in slower compared with faster learners. These results suggest that left auditory cortex WM anatomy, which likely reflects auditory processing efficiency, partly predicts individual differences in an aspect of language learning that relies on rapid temporal processing. It also appears that a global displacement of components of a right hemispheric language network, possibly reflecting individual differences in the functional anatomy and lateralization of language processing, is predictive of speech sound learning.


DOI: http://dx.doi.org/10.1093/cercor/bhk001


Similar Publications

Neural correlates of perceptual plasticity in the auditory midbrain and thalamus.

J Neurosci

January 2025

Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland, 20742.

Hearing is an active process in which listeners must detect and identify sounds, segregate and discriminate stimulus features, and extract their behavioral relevance. Adaptive changes in sound detection can emerge rapidly, during sudden shifts in acoustic or environmental context, or more slowly as a result of practice. Although we know that context- and learning-dependent changes in the sensitivity of auditory cortical (ACX) neurons support many aspects of perceptual plasticity, the contribution of subcortical auditory regions to this process is less understood.


Audio-visual concert performances synchronize audience's heart rates.

Ann N Y Acad Sci

January 2025

Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands.

People enjoy engaging with music. Live music concerts provide an excellent option to investigate real-world music experiences and, at the same time, to use neurophysiological synchrony to assess dynamic engagement. In the current study, we assessed engagement in a live concert setting using synchrony of cardiorespiratory measures, comparing inter-subject correlation, stimulus-response correlation, and phase coherence.


Sparse high-dimensional decomposition of non-primary auditory cortical receptive fields.

PLoS Comput Biol

January 2025

Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland, United States of America.

Characterizing neuronal responses to natural stimuli remains a central goal in sensory neuroscience. In auditory cortical neurons, the stimulus selectivity of elicited spiking activity is summarized by a spectrotemporal receptive field (STRF) that relates neuronal responses to the stimulus spectrogram. Though effective in characterizing primary auditory cortical responses, STRFs of non-primary auditory neurons can be quite intricate, reflecting their mixed selectivity.


Stress classification with in-ear heartbeat sounds.

Comput Biol Med

December 2024

École de technologie supérieure, 1100 Notre-Dame St W, Montreal, H3C 1K3, Quebec, Canada; Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT), 527 Rue Sherbrooke O #8, Montréal, QC H3A 1E3, Canada. Electronic address:

Background: Although stress plays a key role in tinnitus and decreased sound tolerance, conventional hearing devices used to manage these conditions are not currently capable of monitoring the wearer's stress level. The aim of this study was to assess the feasibility of stress monitoring with an in-ear device.

Method: In-ear heartbeat sounds and clinical-grade electrocardiography (ECG) signals were simultaneously recorded while 30 healthy young adults underwent a stress protocol.


Probing Sensorimotor Memory through the Human Speech-Audiomotor System.

J Neurophysiol

December 2024

Yale Child Study Center, Yale School of Medicine, Yale University, New Haven, CT, USA.

Our knowledge of human sensorimotor learning and memory is predominantly based on the visuo-spatial workspace and limb movements. Humans also have a remarkable ability to produce and perceive speech sounds. We asked whether the human speech-auditory system could serve as a model for characterizing the retention of sensorimotor memory in a workspace that is functionally independent of the visuo-spatial one.

