Auditory-nerve fibers (ANFs) from a given cochlear region can vary in threshold sensitivity by up to 60 dB, corresponding to a 1000-fold difference in stimulus level, although each fiber innervates a single inner hair cell (IHC) via a single synapse. ANFs with high thresholds also have low spontaneous rates (SRs) and synapse on the side of the IHC closer to the modiolus, whereas the low-threshold, high-SR fibers synapse on the side closer to the pillar cells. Prior biophysical work has identified modiolar-pillar differences in both pre- and post-synaptic properties, but a comprehensive explanation for the wide range of sensitivities remains elusive.
Auditory nerve (AN) fibers that innervate inner hair cells in the cochlea degenerate with advancing age. It has been proposed that age-related reductions in brainstem frequency-following responses (FFR) to the carrier of low-frequency, high-intensity pure tones may partially reflect this neural loss in the cochlea (Märcher-Rørsted et al., 2022).
Temporal synchrony between facial motion and acoustic modulations is a hallmark feature of audiovisual speech. The moving face and mouth during natural speech are known to be correlated with low-frequency acoustic envelope fluctuations (below 10 Hz), but the precise rates at which envelope information is synchronized with motion in different parts of the face are less clear. Here, we used regularized canonical correlation analysis (rCCA) to learn speech envelope filters whose outputs correlate with motion in different parts of the speaker's face.
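For readers unfamiliar with the method, regularized CCA finds paired linear projections of two multivariate signals (e.g., envelope features and facial-motion features) that are maximally correlated, with a ridge penalty on the covariance matrices to stabilize the solution when features are many or collinear. The sketch below is a generic numpy implementation of rCCA, not the study's actual pipeline; the function name `rcca` and the regularization parameter `reg` are illustrative assumptions.

```python
import numpy as np

def rcca(X, Y, reg=0.1, n_components=1):
    """Regularized CCA: find weights Wx, Wy so that the projections
    X @ Wx and Y @ Wy are maximally correlated. `reg` is a ridge
    penalty added to each auto-covariance matrix for stability.
    (Illustrative sketch, not the paper's implementation.)"""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # Whiten with Cholesky factors of the regularized covariances;
    # the SVD of the whitened cross-covariance then yields the
    # canonical directions and (shrunk) canonical correlations.
    Lx = np.linalg.cholesky(Cxx)
    Ly = np.linalg.cholesky(Cyy)
    K = np.linalg.solve(Lx, Cxy) @ np.linalg.inv(Ly).T
    U, s, Vt = np.linalg.svd(K)
    Wx = np.linalg.solve(Lx.T, U[:, :n_components])
    Wy = np.linalg.solve(Ly.T, Vt[:n_components].T)
    return Wx, Wy, s[:n_components]
```

With `reg=0` this reduces to classical CCA; increasing `reg` trades correlation magnitude for robustness, which matters when the feature dimension approaches the number of time samples, as in filterbank-decomposed envelope and motion signals.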