Sounds activate occipital regions in early blind individuals. However, how different sound categories map onto specific regions of the occipital cortex remains a matter of debate. We used fMRI to characterize brain responses of early blind and sighted individuals to familiar object sounds, human voices, and their respective low-level control sounds. In addition, sighted participants were tested while viewing pictures of faces, objects, and phase-scrambled control pictures. In both early blind and sighted participants, a double dissociation was observed in bilateral auditory cortices between responses to voices and object sounds: Voices elicited categorical responses in bilateral superior temporal sulci, whereas object sounds elicited categorical responses along the lateral fissure bilaterally, including the primary auditory cortex and planum temporale. Outside the auditory regions, object sounds also elicited categorical responses in the left lateral and ventral occipitotemporal regions in both groups. These regions also showed a response preference for images of objects in the sighted group, suggesting a functional specialization that is independent of sensory input and visual experience. Between-group comparisons revealed that categorical responses to object sounds extended more posteriorly into the occipital cortex only in the blind group. Functional connectivity analyses revealed a selective increase in the functional coupling between these reorganized regions and regions of the ventral occipitotemporal cortex in the blind group. In contrast, vocal sounds did not elicit preferential responses in the occipital cortex in either group. Nevertheless, enhanced voice-selective connectivity between the left temporal voice area and the right fusiform gyrus was found in the blind group. Altogether, these findings suggest that, in the absence of developmental vision, separate auditory categories are not equipotent in driving selective auditory recruitment of occipitotemporal regions, and they highlight the presence of domain-selective constraints on the expression of cross-modal plasticity.
DOI: http://dx.doi.org/10.1162/jocn_a_01186
Microsyst Nanoeng
December 2024
Department of Mechanical Engineering, University of California, Berkeley, CA, 94720, USA.
This work presents air-coupled piezoelectric micromachined ultrasonic transducers (pMUTs) that achieve a high sound pressure level (SPL) under low driving voltages by utilizing sputtered potassium sodium niobate ((K,Na)NbO₃, KNN) films. A prototype single KNN pMUT was tested and showed a resonant frequency of 106.3 kHz under 4 V with outstanding characteristics: (1) a large vibration amplitude of 3…
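For context (this sketch is not from the paper): SPL expresses an RMS acoustic pressure on a decibel scale relative to the standard airborne reference pressure of 20 µPa. A minimal Python sketch of the conversion, with a hypothetical pressure value:

```python
import math

P_REF = 20e-6  # standard reference pressure in air, 20 µPa (Pa)

def spl_db(p_rms: float) -> float:
    """Convert an RMS acoustic pressure in pascals to sound pressure level in dB."""
    return 20.0 * math.log10(p_rms / P_REF)

# Hypothetical example: an RMS pressure of 2 Pa corresponds to 100 dB SPL.
print(f"{spl_db(2.0):.1f} dB SPL")
```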
Microsyst Nanoeng
December 2024
ECE Department, University of Alberta, 9211-116 St. NW, Edmonton, T6G 1H9, AB, Canada.
Optomechanical sensors provide a platform for probing acoustic/vibrational properties at the micro-scale. Here, we used cavity optomechanical sensors to interrogate the acoustic environment of adjacent air bubbles in water. We report experimental observations of the volume acoustic modes of these bubbles, including both the fundamental Minnaert breathing mode and a family of higher-order modes extending into the megahertz frequency range.
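For context (this sketch is not from the paper): the fundamental Minnaert breathing mode of a gas bubble has a classical closed-form resonance frequency, f0 = (1/(2πR))·sqrt(3γP0/ρ), which explains why micro-scale bubbles resonate in the megahertz range mentioned above. A minimal Python sketch, assuming an air bubble in water at atmospheric pressure:

```python
import math

def minnaert_frequency(radius_m: float,
                       gamma: float = 1.4,              # adiabatic index of air
                       pressure_pa: float = 101_325.0,  # ambient pressure, 1 atm (Pa)
                       rho: float = 998.0               # density of water (kg/m^3)
                       ) -> float:
    """Fundamental breathing-mode (Minnaert) resonance of a gas bubble in liquid, in Hz."""
    return math.sqrt(3.0 * gamma * pressure_pa / rho) / (2.0 * math.pi * radius_m)

# A 1 mm-radius air bubble in water resonates near 3.3 kHz;
# micrometer-scale bubbles push the resonance into the MHz range.
print(f"{minnaert_frequency(1e-3) / 1e3:.1f} kHz")   # ~3.3 kHz
print(f"{minnaert_frequency(3e-6) / 1e6:.2f} MHz")   # ~1.1 MHz
```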
J Speech Lang Hear Res
December 2024
Escuela de Gobierno, Universidad Torcuato Di Tella, Buenos Aires, Argentina.
Purpose: Children with hearing loss (CHL) who use hearing devices (cochlear implants or hearing aids) and communicate orally have trouble comprehending sentences with noncanonical order. This study explores sentence comprehension strategies in Spanish-speaking CHL, focusing on their ability to integrate morphosyntactic cues (word order, morphological case marking) with verbs differing in their syntax-to-semantics configuration.
Method: Fifty-eight Spanish-speaking CHL and 58 children with typical hearing (CTH) with a hearing age of 3;5-7;8 (years;months) …
J Neurodev Disord
December 2024
Center for Neurocognitive Research (MEG Center), Moscow State University of Psychology and Education, Moscow, Russian Federation.
Background: Difficulties with speech-in-noise perception in autism spectrum disorders (ASD) may be associated with impaired analysis of speech sounds, such as vowels, which represent the fundamental phoneme constituents of human speech. Vowels elicit early (< 100 ms) sustained processing negativity (SPN) in the auditory cortex that reflects the detection of an acoustic pattern based on the presence of formant structure and/or periodic envelope information (f0) and its transformation into an auditory "object".
Methods: We used magnetoencephalography (MEG) and individual brain models to investigate whether SPN is altered in children with ASD and whether this deficit is associated with impairment in their ability to perceive speech in the background of noise.
J Acoust Soc Am
December 2024
Department of Natural Sciences, Université du Quebec en Outaouais, Gatineau, Quebec, Canada.
The endangered St. Lawrence Estuary beluga whale (Delphinapterus leucas; SLEB) faces threats from a variety of anthropogenic factors. Since belugas are a highly social and vocal species, passive acoustic monitoring can deliver non-invasive, continuous, real-time information on SLEB spatiotemporal habitat use, which is crucial for their monitoring and conservation.