Ultra-fast speech comprehension in blind subjects engages primary visual cortex, fusiform gyrus, and pulvinar - a functional magnetic resonance imaging (fMRI) study.

BMC Neurosci

Center for Neurology/Department of General Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Hoppe-Seyler-Str. 3, D-72076 Tübingen, Germany.

Published: July 2013

Background: Individuals suffering from vision loss of a peripheral origin may learn to understand spoken language at a rate of up to about 22 syllables (syl) per second - exceeding by far the maximum performance level of normal-sighted listeners (ca. 8 syl/s). To further elucidate the brain mechanisms underlying this extraordinary skill, functional magnetic resonance imaging (fMRI) was performed in blind subjects of varying ultra-fast speech comprehension capabilities and sighted individuals while listening to sentence utterances of a moderately fast (8 syl/s) or ultra-fast (16 syl/s) syllabic rate.

Results: Besides left inferior frontal gyrus (IFG), bilateral posterior superior temporal sulcus (pSTS) and left supplementary motor area (SMA), blind people highly proficient in ultra-fast speech perception showed significant hemodynamic activation of right-hemispheric primary visual cortex (V1), contralateral fusiform gyrus (FG), and bilateral pulvinar (Pv).

Conclusions: Presumably, FG supports the left-hemispheric perisylvian "language network", i.e., IFG and superior temporal lobe, during the (segmental) sequencing of verbal utterances whereas the collaboration of bilateral pulvinar, right auditory cortex, and ipsilateral V1 implements a signal-driven timing mechanism related to syllabic (suprasegmental) modulation of the speech signal. These data structures, conveyed via left SMA to the perisylvian "language zones", might facilitate - under time-critical conditions - the consolidation of linguistic information at the level of verbal working memory.

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3847124
DOI: http://dx.doi.org/10.1186/1471-2202-14-74

Publication Analysis

Top Keywords

ultra-fast speech (12), speech comprehension (8), blind subjects (8), primary visual (8), visual cortex (8), fusiform gyrus (8), functional magnetic (8), magnetic resonance (8), resonance imaging (8), imaging fMRI (8)

Similar Publications

To develop and assess a deep learning (DL) pipeline to learn dynamic MR image reconstruction from publicly available natural videos (Inter4K). Learning was performed for a range of DL architectures (VarNet, 3D UNet, FastDVDNet) and corresponding sampling patterns (Cartesian, radial, spiral) either from true multi-coil cardiac MR data (N = 692) or from synthetic MR data simulated from Inter4K natural videos (N = 588). Real-time undersampled dynamic MR images were reconstructed using DL networks trained with cardiac data and natural videos, and compressed sensing (CS).


Human neonates can discriminate phonemes, but the neural mechanism underlying this ability is poorly understood. Here we show that the neonatal brain can learn to discriminate natural vowels from backward vowels, a contrast unlikely to have been learnt in the womb. Using functional near-infrared spectroscopy, we examined the neuroplastic changes caused by 5 h of postnatal exposure to random sequences of natural and reversed (backward) vowels (T1), and again 2 h later (T2).


Auditory memory for random time patterns in cochlear implant listeners.

J Acoust Soc Am

September 2021

Laboratoire des Systèmes Perceptifs, Département d'études Cognitives, École Normale Supérieure, PSL University, CNRS, 29 Rue d'Ulm, 75005 Paris, France.

Learning about new sounds is essential for cochlear-implant and normal-hearing listeners alike, with the additional challenge for implant listeners that spectral resolution is severely degraded. Here, a task measuring the rapid learning of slow or fast stochastic temporal sequences [Kang, Agus, and Pressnitzer (2017). J.


Network Modeling for Functional Magnetic Resonance Imaging (fMRI) Signals during Ultra-Fast Speech Comprehension in Late-Blind Listeners.

PLoS One

April 2016

Department of General Neurology, Hertie Institute for Clinical Brain Research, Center for Neurology, University of Tübingen, Hoppe-Seyler-Str. 3, D-72076 Tübingen, Germany.

In many functional magnetic resonance imaging (fMRI) studies blind humans were found to show cross-modal reorganization engaging the visual system in non-visual tasks. For example, blind people can manage to understand (synthetic) spoken language at very high speaking rates up to ca. 20 syllables/s (syl/s).


Hearing the light: neural and perceptual encoding of optogenetic stimulation in the central auditory pathway.

Sci Rep

May 2015

Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, MA 02114; Center for Computational Neuroscience and Neural Technology, Boston University, Boston, Massachusetts 02215; Department of Otology and Laryngology, HMS, Boston, MA 02114.

Optogenetics provides a means to dissect the organization and function of neural circuits. It also offers the translational promise of restoring sensation, enabling movement, or supplanting abnormal activity patterns in pathological brain circuits. However, the inherent sluggishness of evoked photocurrents in conventional channelrhodopsins has hampered the development of optoprostheses that adequately mimic the rate and timing of natural spike patterning.
