Cochlear implantation is a well-established method for restoring hearing sensation in individuals with severe to profound hearing loss. It significantly improves verbal communication for many users, despite substantial variability in patients' reports and performance on speech perception tests and quality-of-life outcome measures. Such variability persists for several years after implantation and could reflect difficulties in attentional regulation.
Background: A cochlear implant (CI) enables deaf people to understand speech, but due to technical restrictions, users face great limitations in noisy conditions. Music training has been shown to augment shared auditory and cognitive neural networks for processing speech and music and to improve auditory-motor coupling, which benefits speech perception in noisy listening conditions. These are promising prerequisites for studying multi-modal neurologic music training (NMT) for speech-in-noise (SIN) perception in adult CI users.
Language comprehension is a complex process involving an extensive brain network. Brain regions responsible for prosodic processing have been studied in adults; however, much less is known about the neural bases of prosodic processing in children. Using magnetoencephalography (MEG), we mapped regions supporting speech envelope tracking (a marker of prosodic processing) in 80 typically developing children, ages 4-18 years, completing a stories listening paradigm.
Background: The cochlear implant (CI) has proven to be a successful treatment for patients with severe-to-profound sensorineural hearing loss; however, outcomes vary. We sought to evaluate particular mutations discovered in previously established sensory and neural partition genes and to compare post-operative CI outcomes.
Materials And Methods: Utilizing a prospective cohort study design, blood samples collected from adult patients with non-syndromic hearing loss undergoing CI were tested for 54 genes of interest with high-throughput sequencing.
There is a weak relationship between clinical and self-reported speech perception outcomes in cochlear implant (CI) listeners. Such poor correspondence may be due to differences in clinical and "real-world" listening environments and stimuli. Speech in the real world is often accompanied by visual cues and background environmental noise, and is generally in a conversational context, all factors that could affect listening demand.
Deaf individuals who use a cochlear implant (CI) have remarkably different outcomes for auditory speech communication ability. One factor assumed to affect CI outcomes is visual crossmodal plasticity in auditory cortex, where deprived auditory regions begin to support non-auditory functions such as vision. Some previous research has viewed crossmodal plasticity as harmful to speech outcomes for CI users if it interferes with sound processing, while other work has demonstrated that plasticity related to visual language may be beneficial for speech recovery.
Objective: Evidence suggests that hearing loss increases the risk of cognitive impairment. However, the relationship between hearing loss and cognition can vary considerably across studies, which may be partially explained by demographic and health factors that are not systematically accounted for in statistical models.
Design: Middle-aged to older adult participants (N = 149) completed a web-based assessment that included speech-in-noise (SiN) and self-report measures of hearing, as well as auditory and visual cognitive interference (Stroop) tasks.
Moderate noise exposure may cause acute loss of cochlear synapses without affecting the cochlear hair cells and hearing threshold; thus, it remains "hidden" to standard clinical tests. This cochlear synaptopathy is one of the main pathologies of noise-induced hearing loss (NIHL). There is no effective treatment for NIHL, mainly because of the lack of a proper drug-delivery technique.
Listening to speech in noise is effortful for individuals with hearing loss, even if they have received a hearing prosthesis such as a hearing aid or cochlear implant (CI). At present, little is known about the neural functions that support listening effort. One form of neural activity that has been suggested to reflect listening effort is the power of 8-12 Hz (alpha) oscillations measured by electroencephalography (EEG).
A common concern for individuals with severe-to-profound hearing loss fitted with cochlear implants (CIs) is difficulty following conversations in noisy environments. Recent work has suggested that these difficulties are related to individual differences in brain function, including verbal working memory and the degree of cross-modal reorganization of auditory areas for visual processing. However, the neural basis for these relationships is not fully understood.
Hearing impairment disrupts processes of selective attention that help listeners attend to one sound source over competing sounds in the environment. Hearing prostheses (hearing aids and cochlear implants, CIs) do not fully remedy these issues. In normal hearing, mechanisms of selective attention arise through the facilitation and suppression of neural activity that represents sound sources.
Objectives: The ability to understand speech is highly variable in people with cochlear implants (CIs), and to date there are no objective measures that identify the root of this discrepancy. However, behavioral measures of temporal processing, such as the temporal modulation transfer function (TMTF), have previously been found to be related to vowel and consonant identification in CI users. The acoustic change complex (ACC) is a cortical auditory-evoked potential response that can be elicited by a "change" in an ongoing stimulus.
Listening in a noisy environment is challenging for individuals with normal hearing and can be a significant burden for those with hearing impairment. The extent to which this burden is alleviated by a hearing device is a major, unresolved issue for rehabilitation. Here, we found that adult users of cochlear implants (CIs) self-reported listening effort during a speech-in-noise task that was positively related to alpha oscillatory activity in the left inferior frontal cortex (canonical Broca's area) and inversely related to speech envelope coherence in the 2-5 Hz range originating in the superior temporal plane, which encompasses auditory cortex.
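The envelope-coherence measure described above can be illustrated with a minimal sketch. This is not the study's analysis pipeline; it assumes a hypothetical `envelope_coherence` helper that extracts the speech amplitude envelope (Hilbert magnitude) and averages the magnitude-squared coherence with a neural signal over the 2-5 Hz band, using `scipy.signal.hilbert` and `scipy.signal.coherence`:

```python
import numpy as np
from scipy.signal import coherence, hilbert

def envelope_coherence(speech, neural, fs, band=(2.0, 5.0)):
    """Mean magnitude-squared coherence between the speech amplitude
    envelope (Hilbert magnitude) and a neural signal, over `band` Hz.
    Illustrative helper, not the published analysis."""
    env = np.abs(hilbert(speech))
    f, cxy = coherence(env, neural, fs=fs, nperseg=int(4 * fs))
    mask = (f >= band[0]) & (f <= band[1])
    return cxy[mask].mean()

# Synthetic check: a "neural" trace that tracks the envelope is far more
# coherent with the speech envelope than an unrelated noise trace.
fs = 100
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)
env = 1.0 + 0.8 * np.sin(2 * np.pi * 3 * t)        # 3 Hz envelope, inside 2-5 Hz
speech = env * rng.standard_normal(t.size)
tracking = np.abs(hilbert(speech)) + 0.1 * rng.standard_normal(t.size)
unrelated = rng.standard_normal(t.size)
print(envelope_coherence(speech, tracking, fs) >
      envelope_coherence(speech, unrelated, fs))   # True
```

Coherence is bounded between 0 and 1, so band-averaged values can be compared across listeners or conditions without further normalization.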
Understanding speech in noise (SiN) is a complex task involving sensory encoding and cognitive resources including working memory and attention. Previous work has shown that brain oscillations, particularly alpha rhythms (8-12 Hz), play important roles in sensory processes involving working memory and attention. However, no previous study has examined brain oscillations during performance of a continuous speech perception test.
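A common way to quantify the alpha rhythms mentioned above is to average power spectral density over 8-12 Hz. The sketch below, with a hypothetical `alpha_power` helper built on `scipy.signal.welch`, shows the idea on synthetic data; it is not any particular study's pipeline:

```python
import numpy as np
from scipy.signal import welch

def alpha_power(eeg, fs, band=(8.0, 12.0)):
    """Mean power spectral density in the alpha band (8-12 Hz) for one
    EEG channel. Illustrative helper, assumed for this sketch."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))  # 2-s windows -> 0.5 Hz bins
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Synthetic check: a 10 Hz oscillation embedded in noise should show
# more alpha power than the noise alone.
fs = 250
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
noise = rng.standard_normal(t.size)
with_alpha = noise + 2.0 * np.sin(2 * np.pi * 10 * t)
print(alpha_power(with_alpha, fs) > alpha_power(noise, fs))  # True
```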
Objective: To record envelope following responses (EFRs) to monaural amplitude-modulated broadband noise carriers in which amplitude modulation (AM) depth was slowly changed over time, and to compare these objective electrophysiological measures to subjective behavioral thresholds in young normal-hearing and older subjects.
Participants: Three groups of subjects were included: a young normal-hearing group (YNH; 18 to 28 years; pure-tone average = 5 dB HL), a first older group ("O1"; 41 to 62 years; pure-tone average = 19 dB HL), and a second older group ("O2"; 67 to 82 years; pure-tone average = 35 dB HL). Electrophysiology: In condition 1, the AM depth (41 Hz) of a white noise carrier was continuously varied from 2% to 100% (5%/s).
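The depth-sweep stimulus in condition 1 can be sketched in a few lines. This is an assumed reconstruction from the parameters stated above (41 Hz modulator, white noise carrier, depth ramped from 2% to 100% at 5%/s), not the authors' actual stimulus code; `am_depth_sweep` is a hypothetical name:

```python
import numpy as np

def am_depth_sweep(fs=16000, fm=41.0, d0=0.02, d1=1.0, rate=0.05, seed=0):
    """White-noise carrier whose sinusoidal AM depth ramps linearly
    from d0 (2%) to d1 (100%) at `rate` (5%) per second."""
    dur = (d1 - d0) / rate                 # (1.00 - 0.02) / 0.05 = 19.6 s
    n = round(dur * fs)
    t = np.arange(n) / fs
    depth = d0 + rate * t                  # linear depth ramp
    carrier = np.random.default_rng(seed).standard_normal(n)
    return (1.0 + depth * np.sin(2 * np.pi * fm * t)) * carrier

x = am_depth_sweep()
print(round(x.size / 16000, 1))  # 19.6 (seconds of audio)
```

The sampling rate and seed here are arbitrary choices for the sketch; only the modulator rate, depth range, and ramp rate come from the abstract.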
Objective: Voice onset time (VOT) is a critical temporal cue for perception of speech in cochlear implant (CI) users. We assessed the cortical auditory evoked potentials (CAEPs) to consonant vowels (CVs) with varying VOTs and related these potentials to various speech perception measures.
Methods: CAEPs were recorded from 64 scalp electrodes during passive listening in CI and normal-hearing (NH) groups.
Using noninvasive neuroimaging, researchers have shown that young children have bilateral and diffuse language networks, which become increasingly left lateralized and focal with development. Connectivity within the distributed pediatric language network has been minimally studied, and conventional neuroimaging approaches do not distinguish task-related signal changes from those that are task essential. In this study, we propose a novel multimodal method to map core language sites from patterns of information flux.
Objective: Sound modulation is a critical temporal cue for the perception of speech and environmental sounds. To examine auditory cortical responses to sound modulation, we developed an acoustic change stimulus involving amplitude modulation (AM) of ongoing noise. The AM transitions in this stimulus evoked an acoustic change complex (ACC) that was examined parametrically in terms of rate and depth of modulation and hemispheric symmetry.
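An acoustic change stimulus of this kind can be sketched as ongoing noise whose second half becomes amplitude-modulated, so that the AM onset is the "change" that evokes the ACC. The parameter values and the `acc_stimulus` name below are assumptions for illustration, not the study's stimulus specification:

```python
import numpy as np

def acc_stimulus(fs=16000, pre=1.0, post=1.0, fm=40.0, depth=0.5, seed=0):
    """Ongoing noise whose second half is amplitude-modulated; the AM
    onset at t = `pre` seconds is the acoustic change. Illustrative only;
    rate and depth would be varied parametrically in an experiment."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    noise = np.random.default_rng(seed).standard_normal(n_pre + n_post)
    t = np.arange(n_post) / fs
    gain = np.concatenate([np.ones(n_pre),
                           1.0 + depth * np.sin(2 * np.pi * fm * t)])
    return gain * noise

x = acc_stimulus()  # 2 s total: 1 s unmodulated, then 1 s of 40 Hz AM
```

Because the carrier noise runs continuously through the transition, the only cue at the midpoint is the modulation itself, which is what isolates the ACC from a simple onset response.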
There have been a number of studies suggesting that oscillatory alpha activity (~10 Hz) plays a pivotal role in attention by gating information flow to relevant sensory regions. The vast majority of these studies have looked at shifts of attention in the spatial domain and only in a single modality (often visual or sensorimotor). In the current magnetoencephalography (MEG) study, we investigated the role of alpha activity in the suppression of a distracting modality stream.
Abnormal auditory adaptation is a standard clinical tool for diagnosing auditory nerve disorders due to acoustic neuromas. In the present study we investigated auditory adaptation in auditory neuropathy owing to disordered function of inner hair cell ribbon synapses (temperature-sensitive auditory neuropathy) or auditory nerve fibres. Subjects were tested when afebrile for (i) psychophysical loudness adaptation to comfortably-loud sustained tones; and (ii) physiological adaptation of auditory brainstem responses to clicks as a function of their position in brief 20-click stimulus trains (#1, 2, 3 … 20).
Objective: Compare brain potentials to consonant vowels (CVs) as a function of both voice onset times (VOTs) and consonant position: initial (CV) versus second (VCV).
Methods: Auditory cortical potentials (N100, P200, N200, and a late slow negativity (SN)) were recorded from scalp electrodes in twelve normal-hearing subjects to consonant vowels in initial position (CVs: /du/ and /tu/), in second position (VCVs: /udu/ and /utu/), and to vowels alone (V: /u/) and paired (VVs: /uu/), separated in time to simulate consonant voice onset times (VOTs).
Results: CVs evoked "acoustic onset" N100s of similar latency but larger amplitudes to /du/ than /tu/.
IEEE Trans Neural Syst Rehabil Eng, July 2012
Although the cochlear implant (CI) is widely considered the most successful neural prosthesis, it is essentially an open-loop system that requires extensive initial fitting and frequent tuning to maintain a high, but not necessarily optimal, level of performance. Two developments in neuroscience and neuroengineering now make it feasible to design a closed-loop CI. One development is the recording and interpretation of evoked potentials (EPs) from the peripheral to the central nervous system.
Tinnitus is a phantom sensation of sound in the absence of external stimulation. However, external stimulation, particularly electric stimulation via a cochlear implant, has been shown to suppress tinnitus. In contrast to traditional methods that deliver speech sounds or high-rate (>2000 Hz) stimulation, the present study found a unique unilaterally deafened cochlear implant subject whose tinnitus was completely suppressed by a low-rate (<100 Hz) stimulus, delivered at a level softer than the tinnitus to the apical part of the cochlea.
Objectives: Auditory cortical N100s were examined in ten auditory neuropathy (AN) subjects as objective measures of impaired hearing.
Methods: Latencies and amplitudes of N100 in AN to increases of frequency (4-50%) or intensity (4-8 dB) of low (250 Hz) or high (4000 Hz) frequency tones were compared with results from normal-hearing controls. The sites of auditory nerve dysfunction were pre-synaptic (n=3), due to otoferlin mutations causing temperature-sensitive deafness; post-synaptic (n=4), accompanied by other cranial and/or peripheral neuropathies; and undefined (n=3).