Hearing-impaired listeners struggle to understand speech in noise, even when using cochlear implants (CIs) or hearing aids. Successful listening in noisy environments depends on the brain's ability to organize a mixture of sound sources into distinct perceptual streams (i.e.
Purpose: Frequency selectivity is a fundamental property of the peripheral auditory system; however, the invasiveness of auditory nerve (AN) experiments limits its study in the human ear. Compound action potentials (CAPs) associated with forward masking have been suggested as an alternative to assess cochlear frequency selectivity. Previous methods relied on an empirical comparison of AN and CAP tuning curves in animal models, arguably not taking full advantage of the information contained in forward-masked CAP waveforms.
Background: Disabling hearing loss affects nearly 466 million people worldwide (World Health Organization). The auditory brainstem response (ABR) is the most common non-invasive clinical measure of evoked potentials, e.g.
Neurophysiological studies suggest that intrinsic brain oscillations influence sensory processing, especially of rhythmic stimuli like speech. Prior work suggests that brain rhythms may mediate perceptual grouping and selective attention to speech amidst competing sound, as well as more linguistic aspects of speech processing like predictive coding. However, we know of no prior studies that have directly tested, at the single-trial level, whether brain oscillations relate to speech-in-noise outcomes.
Listeners with sensorineural hearing loss (SNHL) have substantial perceptual deficits, especially in noisy environments. Unfortunately, speech-intelligibility models have limited success in predicting the performance of listeners with hearing loss. A better understanding of the various suprathreshold factors that contribute to neural-coding degradations of speech in noisy conditions will facilitate better modeling and clinical outcomes.
Animal models suggest that cochlear afferent nerve endings may be more vulnerable than sensory hair cells to damage from acoustic overexposure and aging. Because neural degeneration without hair-cell loss cannot be detected in standard clinical audiometry, whether such damage occurs in humans is hotly debated. Here, we address this debate through co-ordinated experiments in at-risk humans and a wild-type chinchilla model.
Listeners with sensorineural hearing loss (SNHL) struggle to understand speech, especially in noise, despite audibility compensation. These real-world suprathreshold deficits are hypothesized to arise from degraded frequency tuning and reduced temporal-coding precision; however, peripheral neurophysiological studies testing these hypotheses have been largely limited to in-quiet artificial vowels. Here, we measured single auditory-nerve-fiber responses to a connected speech sentence in noise from anesthetized male chinchillas with normal hearing (NH) or noise-induced hearing loss (NIHL).
A difference in fundamental frequency (F0) between two vowels is an important segregation cue prior to identifying concurrent vowels. To understand how age and hearing loss affect the benefit of this cue for identification, Chintanpalli, Ahlstrom, and Dubno [(2016). J.
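The F0-difference cue described above can be illustrated with a toy stimulus. This is a minimal sketch only: real concurrent-vowel experiments use formant-synthesized vowels, whereas here a flat-spectrum harmonic complex stands in for each vowel, and the sampling rate and F0 values are illustrative assumptions.

```python
import numpy as np

def harmonic_complex(f0, dur, fs, n_harmonics=20):
    """Equal-amplitude harmonics of f0 (Hz) summed in sine phase;
    a simplified stand-in for a synthetic vowel."""
    t = np.arange(int(dur * fs)) / fs
    x = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        x += np.sin(2 * np.pi * k * f0 * t)
    return x / n_harmonics

fs = 16000  # Hz (illustrative)

# Concurrent "vowels" with no F0 difference vs. a 4-semitone F0 difference
same_f0 = harmonic_complex(100, 0.4, fs) + harmonic_complex(100, 0.4, fs)
diff_f0 = harmonic_complex(100, 0.4, fs) + harmonic_complex(100 * 2 ** (4 / 12), 0.4, fs)
```

With a nonzero F0 difference the two sets of harmonics fall at distinct frequencies, which is the spectral basis of the segregation cue; with identical F0s the harmonics overlap completely, the condition under which model predictions are hardest.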
Temporal coherence of sound fluctuations across spectral channels is thought to aid auditory grouping and scene segregation. Although prior studies on the neural bases of temporal-coherence processing focused mostly on cortical contributions, neurophysiological evidence suggests that temporal-coherence-based scene analysis may start as early as the cochlear nucleus (i.e.
To understand the mechanisms of speech perception in everyday listening environments, it is important to elucidate the relative contributions of different acoustic cues in transmitting phonetic content. Previous studies suggest that the envelope of speech in different frequency bands conveys most speech content, while the temporal fine structure (TFS) can aid in segregating target speech from background noise. However, the role of TFS in conveying phonetic content beyond what envelopes convey for intact speech in complex acoustic scenes is poorly understood.
A fundamental question in the neuroscience of everyday communication is how scene acoustics shape the neural processing of attended speech sounds and in turn impact speech intelligibility. While it is well known that the temporal envelopes in target speech are important for intelligibility, how the neural encoding of target-speech envelopes is influenced by background sounds or other acoustic features of the scene is unknown. Here, we combine human electroencephalography with simultaneous intelligibility measurements to address this key gap.
Significant scientific and translational questions remain in auditory neuroscience surrounding the neural correlates of perception. Relating perceptual and neural data collected from humans can be useful; however, human-based neural data are typically limited to evoked far-field responses, which lack anatomical and physiological specificity. Laboratory-controlled preclinical animal models offer the advantage of comparing single-unit and evoked responses from the same animals.
J Assoc Res Otolaryngol
February 2021
Animal models of noise-induced hearing loss (NIHL) show a dramatic mismatch between cochlear characteristic frequency (CF, based on place of innervation) and the dominant response frequency in single auditory-nerve-fiber responses to broadband sounds (i.e., distorted tonotopy, DT).
The chinchilla animal model for noise-induced hearing loss has an extensive history spanning more than 50 years. Many behavioral, anatomical, and physiological characteristics of the chinchilla make it a valuable animal model for hearing science. These include similarities with human hearing frequency and intensity sensitivity, the ability to be trained behaviorally with acoustic stimuli relevant to human hearing, a docile nature that allows many physiological measures to be made in an awake state, physiological robustness that allows for data to be collected from all levels of the auditory system, and the ability to model various types of conductive and sensorineural hearing losses that mimic pathologies observed in humans.
Speech intelligibility can vary dramatically between individuals with similar clinically defined severity of hearing loss based on the audiogram. These perceptual differences, despite equal audiometric-threshold elevation, are often assumed to reflect central-processing variations. Here, we compared peripheral processing in auditory nerve (AN) fibers of male chinchillas between two prevalent hearing loss etiologies: metabolic hearing loss (MHL) and noise-induced hearing loss (NIHL).
The relative importance of neural temporal and place coding in auditory perception is still a matter of much debate. The current article is a compilation of viewpoints from leading auditory psychophysicists and physiologists regarding the upper frequency limit for the use of neural phase locking to code temporal fine structure in humans. While phase locking is used for binaural processing up to about 1500 Hz, there is disagreement regarding the use of monaural phase-locking information at higher frequencies.
Studies in multiple species, including in post-mortem human tissue, have shown that normal aging and/or acoustic overexposure can lead to a significant loss of afferent synapses innervating the cochlea. Hypothetically, this cochlear synaptopathy can lead to perceptual deficits in challenging environments and can contribute to central neural effects such as tinnitus. However, because cochlear synaptopathy can occur without any measurable changes in audiometric thresholds, synaptopathy can remain hidden from standard clinical diagnostics.
When presented with two vowels simultaneously, humans are often able to identify the constituent vowels. Computational models exist that simulate this ability; however, they predict listener confusions poorly, particularly when the two vowels have the same fundamental frequency. Presented here is a model that is uniquely able to predict the combined representation of concurrent vowels.
Sensitivity to interaural time differences (ITDs) in the envelope and temporal fine structure (TFS) of amplitude-modulated (AM) tones was assessed for young and older subjects, all with clinically normal hearing at the carrier frequencies of 250 and 500 Hz. Some subjects had hearing loss at higher frequencies. In experiment 1, thresholds for detecting changes in ITD were measured when the ITD was present in the TFS alone (ITD_TFS), the envelope alone (ITD_ENV), or both (ITD_TFS+ENV).
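The stimulus manipulation in this design can be sketched in code. This is a minimal illustration, not the study's actual stimulus-generation code: an AM tone is built from a carrier (TFS) and a modulator (envelope), and a delay can be imposed on either component independently in one ear to create the three ITD conditions. The sampling rate, modulation rate, and 500-microsecond ITD are illustrative assumptions.

```python
import numpy as np

def am_tone(fc, fm, dur, fs, tfs_delay=0.0, env_delay=0.0, depth=1.0):
    """AM tone with separate delays (in seconds) applied to the
    temporal fine structure (carrier) and the envelope (modulator)."""
    t = np.arange(int(dur * fs)) / fs
    envelope = 1.0 + depth * np.cos(2 * np.pi * fm * (t - env_delay))
    carrier = np.sin(2 * np.pi * fc * (t - tfs_delay))
    return envelope * carrier

fs = 48000        # Hz (illustrative)
itd = 500e-6      # 500-microsecond ITD (illustrative)

left = am_tone(fc=500, fm=20, dur=0.5, fs=fs)
# ITD in the TFS alone: delay only the carrier in one ear
right_tfs = am_tone(fc=500, fm=20, dur=0.5, fs=fs, tfs_delay=itd)
# ITD in the envelope alone: delay only the modulator
right_env = am_tone(fc=500, fm=20, dur=0.5, fs=fs, env_delay=itd)
# ITD in both components together
right_both = am_tone(fc=500, fm=20, dur=0.5, fs=fs, tfs_delay=itd, env_delay=itd)
```

Separating the two delays in this way is what lets envelope-based and TFS-based ITD sensitivity be measured independently with the same carrier frequency.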
An estimate of lifetime noise exposure was used as the primary predictor of performance on a range of behavioral tasks: frequency and intensity difference limens, amplitude modulation detection, interaural phase discrimination, the digit triplet speech test, the co-ordinate response speech measure, an auditory localization task, a musical consonance task and a subjective report of hearing ability. One hundred and thirty-eight participants (81 females) aged 18-36 years were tested, with a wide range of self-reported noise exposure. All had normal pure-tone audiograms up to 8 kHz.
Understanding the biology of the previously underappreciated sensitivity of cochlear synapses to noise insult, and its clinical consequences, is becoming a mission for a growing number of auditory researchers. In addition, several research groups have become interested in developing therapeutic approaches that can reverse synaptopathy and restore hearing function. One of the major challenges to realizing the potential of synaptopathy rodent models is that current clinical audiometric approaches cannot yet reveal the presence of this subtle cochlear pathology in humans.
Noise-induced cochlear synaptopathy has been demonstrated in numerous rodent studies. In these animal models, the disorder is characterized by a reduction in amplitude of wave I of the auditory brainstem response (ABR) to high-level stimuli, whereas the response at threshold is unaffected. The aim of the present study was to determine if this disorder is prevalent in young adult humans with normal audiometric hearing.
The compressive nonlinearity of cochlear signal transduction, reflecting outer-hair-cell function, manifests as suppressive spectral interactions; e.g., two-tone suppression.