Background: Individuals suffering from vision loss of peripheral origin may learn to understand spoken language at a rate of up to about 22 syllables (syl) per second, far exceeding the maximum performance level of normally sighted listeners (ca. 8 syl/s). To further elucidate the brain mechanisms underlying this extraordinary skill, functional magnetic resonance imaging (fMRI) was performed in blind subjects of varying ultra-fast speech comprehension capabilities and in sighted individuals while they listened to sentence utterances at a moderately fast (8 syl/s) or ultra-fast (16 syl/s) syllabic rate.
Results: Besides left inferior frontal gyrus (IFG), bilateral posterior superior temporal sulcus (pSTS) and left supplementary motor area (SMA), blind people highly proficient in ultra-fast speech perception showed significant hemodynamic activation of right-hemispheric primary visual cortex (V1), contralateral fusiform gyrus (FG), and bilateral pulvinar (Pv).
Conclusions: Presumably, FG supports the left-hemispheric perisylvian "language network", i.e., IFG and superior temporal lobe, during the (segmental) sequencing of verbal utterances, whereas the collaboration of bilateral pulvinar, right auditory cortex, and ipsilateral V1 implements a signal-driven timing mechanism related to the syllabic (suprasegmental) modulation of the speech signal. These data structures, conveyed via left SMA to the perisylvian "language zones", might facilitate, under time-critical conditions, the consolidation of linguistic information at the level of verbal working memory.
Download full-text PDF:
- PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3847124
- DOI: http://dx.doi.org/10.1186/1471-2202-14-74
Sci Rep
May 2024
UCL Centre for Translational Cardiovascular Imaging, University College London, 30 Guilford St, London, WC1N 1EH, UK.
To develop and assess a deep learning (DL) pipeline to learn dynamic MR image reconstruction from publicly available natural videos (Inter4K). Learning was performed for a range of DL architectures (VarNet, 3D UNet, FastDVDNet) and corresponding sampling patterns (Cartesian, radial, spiral) either from true multi-coil cardiac MR data (N = 692) or from synthetic MR data simulated from Inter4K natural videos (N = 588). Real-time undersampled dynamic MR images were reconstructed using DL networks trained with cardiac data and natural videos, and compressed sensing (CS).
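The core idea above — that networks can learn MR reconstruction from natural videos by simulating undersampled acquisitions from them — can be sketched minimally. The snippet below treats a toy "video frame" as a fully sampled image, moves it to k-space, drops Cartesian phase-encode lines, and returns the zero-filled reconstruction that would serve as the network input. Function names, the single-coil assumption, the acceleration factor, and the toy frame are all illustrative assumptions, not details of the paper's multi-coil pipeline.

```python
import numpy as np

def cartesian_mask(shape, accel=4, center_lines=8, seed=0):
    """Random Cartesian undersampling mask: keep each k-space line
    with probability 1/accel, always keep a fully sampled center band."""
    ny, nx = shape
    rng = np.random.default_rng(seed)
    keep = rng.random(ny) < 1.0 / accel
    c0 = ny // 2 - center_lines // 2
    keep[c0:c0 + center_lines] = True          # low-frequency band
    return np.repeat(keep[:, None], nx, axis=1)

def simulate_undersampled(frame, accel=4):
    """Simulate a single-coil undersampled acquisition of `frame`:
    FFT to k-space, zero out unsampled lines, inverse FFT back.
    Returns (zero-filled input, ground-truth target) for training."""
    k = np.fft.fftshift(np.fft.fft2(frame))
    mask = cartesian_mask(frame.shape, accel)
    zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(k * mask)))
    return zero_filled, frame

# Toy stand-in for a natural-video frame: gradient plus a bright square.
frame = np.zeros((64, 64))
frame += np.linspace(0.0, 0.5, 64)[None, :]
frame[20:40, 20:40] += 1.0
zf, gt = simulate_undersampled(frame, accel=4)
```

In this scheme, (zf, gt) pairs generated from arbitrary videos take the place of scanner data; the reconstruction network (VarNet, 3D UNet, FastDVDNet in the study) is then trained to map zf back to gt.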
Nat Hum Behav
August 2022
School of Psychology, Shenzhen University, Shenzhen, China.
Human neonates can discriminate phonemes, but the neural mechanism underlying this ability is poorly understood. Here we show that the neonatal brain can learn to discriminate natural vowels from backward vowels, a contrast unlikely to have been learnt in the womb. Using functional near-infrared spectroscopy, we examined the neuroplastic changes caused by 5 h of postnatal exposure to random sequences of natural and reversed (backward) vowels (T1), and again 2 h later (T2).
J Acoust Soc Am
September 2021
Laboratoire des Systèmes Perceptifs, Département d'études Cognitives, École Normale Supérieure, PSL University, CNRS, 29 Rue d'Ulm, 75005 Paris, France.
Learning about new sounds is essential for cochlear-implant and normal-hearing listeners alike, with the additional challenge for implant listeners that spectral resolution is severely degraded. Here, a task measuring the rapid learning of slow or fast stochastic temporal sequences [Kang, Agus, and Pressnitzer (2017). J.
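The stimuli in such rapid-learning paradigms can be sketched as stochastic click trains in which a frozen "reference" token is re-presented across trials, interleaved with fresh random tokens, so that learning appears as improving detection of the repeat. This is a minimal sketch of that trial structure, assuming Poisson click trains for the slow/fast sequences; rates, durations, and function names are illustrative, not the study's parameters.

```python
import numpy as np

def click_train(rate_hz, dur_s=1.0, fs=16000, rng=None):
    """Poisson click train: a sparse +/-1 impulse sequence whose mean
    click rate controls how 'fast' the stochastic temporal pattern is."""
    rng = rng or np.random.default_rng()
    n = int(dur_s * fs)
    x = np.zeros(n)
    clicks = rng.random(n) < rate_hz / fs      # Bernoulli approximation
    x[clicks] = rng.choice([-1.0, 1.0], clicks.sum())
    return x

rng = np.random.default_rng(1)
slow = click_train(rate_hz=4, rng=rng)         # slow stochastic sequence
fast = click_train(rate_hz=30, rng=rng)        # fast stochastic sequence

# One frozen reference token, re-presented on ~half of the trials.
reference = click_train(rate_hz=30, rng=np.random.default_rng(7))
trials = [reference if rng.random() < 0.5 else click_train(30, rng=rng)
          for _ in range(10)]
```

The degraded spectral resolution of cochlear implants motivates using temporal sequences like these, since click timing survives the implant's coarse spectral channels.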
PLoS One
April 2016
Department of General Neurology, Hertie Institute for Clinical Brain Research, Center for Neurology, University of Tübingen, Hoppe-Seyler-Str. 3, D-72076 Tübingen, Germany.
In many functional magnetic resonance imaging (fMRI) studies, blind humans have been found to show cross-modal reorganization engaging the visual system in non-visual tasks. For example, blind people can learn to understand (synthetic) spoken language at very high speaking rates of up to ca. 20 syllables/s (syl/s).
Sci Rep
May 2015
[1] Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, MA 02114; [2] Center for Computational Neuroscience and Neural Technology, Boston University, Boston, Massachusetts 02215; [3] Department of Otology and Laryngology, HMS, Boston, MA 02114.
Optogenetics provides a means to dissect the organization and function of neural circuits. Optogenetics also offers the translational promise of restoring sensation, enabling movement or supplanting abnormal activity patterns in pathological brain circuits. However, the inherent sluggishness of evoked photocurrents in conventional channelrhodopsins has hampered the development of optoprostheses that adequately mimic the rate and timing of natural spike patterning.