Objective: The electrically evoked stapedial reflex threshold (eSRT) has proven useful in setting upper stimulation levels for cochlear implant recipients. However, the literature suggests that the reflex can be difficult to observe in a significant percentage of the population. The primary goal of this investigation was to assess the difference in eSRT levels obtained with alternative acoustic admittance probe tone frequencies.
Objectives: The goal of this study was to create and validate a new set of sentence lists that could be used to evaluate the speech-perception abilities of listeners with hearing loss in cases where adult materials are inappropriate due to difficulty level or content. The authors aimed to generate a large number of sentence lists with an equivalent level of difficulty for the evaluation of performance over time and across conditions.
Design: The original Pediatric AzBio sentence corpus included 450 sentences recorded from one female talker.
Objective: Spectral modulation detection (SMD) provides a psychoacoustic estimate of spectral resolution. The SMD threshold for an implanted ear is highly correlated with speech understanding and is thus a non-linguistic, psychoacoustic index of speech understanding. This measure, however, is time and equipment intensive and thus not practical for clinical use.
In a previous paper we reported the frequency selectivity, temporal resolution, nonlinear cochlear processing, and speech recognition in quiet and in noise for 5 listeners with normal hearing (mean age 24.2 years) and 17 older listeners (mean age 68.5 years) with bilateral, mild sloping to profound sensory hearing loss (Gifford et al.
Objectives: The authors describe the localization and speech-understanding abilities of a patient fit with bilateral cochlear implants (CIs) for whom acoustic low-frequency hearing was preserved in both cochleae.
Design: Three signals were used in the localization experiments: low-pass, high-pass, and wideband noise. Speech understanding was assessed with the AzBio sentences presented in noise.
Objectives: Patients with a cochlear implant (CI) in one ear and a hearing aid in the other ear commonly achieve the highest speech-understanding scores when they have access to both electrically and acoustically stimulated information. At issue in this study was whether a measure of auditory function in the aided ear would predict the benefit to speech understanding when the information from the aided ear was added to the information from the CI.
Design: The subjects were 22 bimodal listeners with a CI in one ear and low-frequency acoustic hearing in the nonimplanted ear.
Objectives: It was hypothesized that auditory training would allow bimodal patients to better combine the low-frequency acoustic information provided by a hearing aid with the electric information provided by a cochlear implant, thus maximizing the benefit of combined acoustic (A) and electric (E) stimulation (EAS).
Design: Performance in quiet or in the presence of a multitalker babble at +5 dB signal to noise ratio was evaluated in seven bimodal patients before and after auditory training. The performance measures comprised identification of vowels and consonants, consonant-nucleus-consonant words, sentences, voice gender, and emotion.
Objectives: The goal of this study was to create and validate a new set of sentence lists that could be used to evaluate the speech perception abilities of hearing-impaired listeners and cochlear implant (CI) users. Our intention was to generate a large number of sentence lists with an equivalent level of difficulty for the evaluation of performance over time and across conditions.
Design: The AzBio sentence corpus includes 1000 sentences recorded from two female and two male talkers.
Objectives: The aim of this study was to determine the minimum amount of low-frequency acoustic information that is required to achieve speech perception benefit in listeners with a cochlear implant in one ear and low-frequency hearing in the other ear.
Design: The recognition of monosyllabic words in quiet and sentences in noise was evaluated in three listening conditions: electric stimulation alone, acoustic stimulation alone, and combined electric and acoustic stimulation. The acoustic stimuli presented to the nonimplanted ear were either low-pass-filtered at 125, 250, 500, or 750 Hz, or unfiltered (wideband).
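The low-pass conditions described above can be sketched with a simple digital filter. This is a minimal one-pole sketch, assuming a generic IIR filter rather than the (much steeper) filters actually used in the study; all names are illustrative.

```python
import math

def lowpass(samples, cutoff_hz, fs_hz):
    """One-pole IIR low-pass filter with its -3 dB point near cutoff_hz.

    A crude stand-in for the filtering conditions in the study
    (125, 250, 500, or 750 Hz cutoffs versus wideband).
    """
    a = math.exp(-2.0 * math.pi * cutoff_hz / fs_hz)  # pole location
    y, out = 0.0, []
    for x in samples:
        y = a * y + (1.0 - a) * x  # leaky integrator
        out.append(y)
    return out
```

For example, `lowpass(signal, 500.0, 16000.0)` passes components well below 500 Hz nearly unchanged while attenuating higher frequencies at 6 dB/octave.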
Objectives: Our aim was to assess, for patients with a cochlear implant in one ear and low-frequency acoustic hearing in the contralateral ear, whether reducing the overlap in frequencies conveyed in the acoustic signal and those analyzed by the cochlear implant speech processor would improve speech recognition.
Design: The recognition of monosyllabic words in quiet and sentences in noise was evaluated in three listening configurations: electric stimulation alone, acoustic stimulation alone, and combined electric and acoustic stimulation. The acoustic stimuli were either unfiltered or low-pass (LP) filtered at 250, 500, or 750 Hz.
Speech understanding by cochlear implant listeners may be limited by their ability to perceive complex spectral envelopes. Here, spectral envelope perception was characterized by spectral modulation transfer functions in which modulation detection thresholds became poorer with increasing spectral modulation frequency (SMF). Thresholds at low SMFs, less likely to be influenced by spectral resolution, were correlated with vowel and consonant identifications [Litvak, L.
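A common way to define such a spectral-ripple envelope (a sketch of the general stimulus class, not necessarily the exact parameters used here) is a sinusoid in dB on a log-frequency axis, with the spectral modulation frequency given in cycles per octave.

```python
import math

def ripple_envelope_db(freq_hz, smf_cyc_per_oct, depth_db,
                       f_ref=350.0, phase=0.0):
    """Spectral-ripple envelope level (dB) at freq_hz.

    smf_cyc_per_oct: spectral modulation frequency in cycles/octave.
    depth_db: peak-to-trough spectral contrast in dB.
    f_ref and phase are hypothetical reference parameters.
    """
    octaves = math.log2(freq_hz / f_ref)
    return (depth_db / 2.0) * math.sin(
        2.0 * math.pi * smf_cyc_per_oct * octaves + phase)
```

Modulation detection then asks whether a listener can distinguish this rippled spectrum (depth_db > 0) from a flat one; the threshold is the smallest detectable depth at a given SMF.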
In the newest implementation of cochlear implant surgery, electrode arrays of 10 or 20 mm are inserted into the cochlea with the aim of preserving hearing in the region apical to the tip of the electrode array. In the current study two measures were used to assess hearing preservation: changes in audiometric threshold and changes in psychophysical estimates of nonlinear cochlear processing. Nonlinear cochlear processing was evaluated at signal frequencies of 250 and 500 Hz using Schroeder phase maskers with various indices of masker phase curvature.
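Schroeder-phase maskers are harmonic complexes whose component starting phases follow a quadratic rule, so the instantaneous frequency sweeps within each period. A minimal sketch of the standard phase formula, with the curvature scaling c treated as an assumed stand-in for the "index of masker phase curvature":

```python
import math

def schroeder_phase(n_harmonics, c):
    """Starting phase of each harmonic in a Schroeder-phase complex.

    theta_k = c * pi * k * (k + 1) / N for harmonics k = 1..N.
    c = +1 and c = -1 give maximally flat envelopes with opposite
    sweep directions; intermediate c values vary the phase curvature.
    """
    return [c * math.pi * k * (k + 1) / n_harmonics
            for k in range(1, n_harmonics + 1)]
```

Summing equal-amplitude harmonics with these phases produces the masker waveform; listeners with healthy nonlinear cochlear processing typically show large threshold differences between positive and negative curvatures.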
Purpose: To determine why, in a pilot study, only 1 of 11 cochlear implant listeners was able to reliably identify a frequency-to-electrode map where the intervals of a familiar melody were played on the correct musical scale. The authors sought to validate their method and to assess the effect of pitch strength on musical scale recognition in normal-hearing listeners.
Method: Musical notes were generated as either sine waves or spectrally shaped noise bands, with a center frequency equal to that of a desired note and symmetrical (log-scale) reduction in amplitude away from the center frequency.
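The symmetrical log-scale amplitude reduction described above can be sketched as a gain rule: the noise band is flat at the note's center frequency and falls off linearly in dB per octave on both sides. The slope value here is hypothetical, not taken from the study.

```python
import math

def noiseband_gain_db(freq_hz, center_hz, slope_db_per_oct):
    """Level (dB re: peak) of a spectrally shaped noise band at freq_hz.

    0 dB at the center frequency, with symmetrical (log-scale)
    attenuation of slope_db_per_oct away from it; a shallower slope
    yields a wider band and hence a weaker pitch.
    """
    return -slope_db_per_oct * abs(math.log2(freq_hz / center_hz))
```

Varying the slope trades off pitch strength: a steep slope approaches the sine-wave condition, while a shallow slope gives a noisier, weaker pitch.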
Fifteen patients fit with a cochlear implant in one ear and a hearing aid in the other ear were presented with tests of speech and melody recognition and voice discrimination under conditions of electric (E) stimulation, acoustic (A) stimulation, and combined electric and acoustic stimulation (EAS). When acoustic information was added to electrically stimulated information, performance increased by 17-23 percentage points on tests of word and sentence recognition in quiet and sentence recognition in noise. On average, the EAS patients achieved higher scores on CNC words than patients fit with a unilateral cochlear implant.
Purpose: To compare the effects of conventional amplification (CA) and digital frequency compression (DFC) amplification on the speech recognition abilities of candidates for a partial-insertion cochlear implant, that is, candidates for combined electric and acoustic stimulation (EAS).
Method: The participants were 6 patients whose audiometric thresholds at 500 Hz and below were
Purpose: The authors assessed whether (a) a full-insertion cochlear implant would provide a higher level of speech understanding than bilateral low-frequency acoustic hearing, (b) contralateral acoustic hearing would add to the speech understanding provided by the implant, and (c) the level of performance achieved with electric stimulation plus contralateral acoustic hearing would be similar to performance reported in the literature for patients with a partial insertion cochlear implant.
Method: Monosyllabic word recognition as well as sentence recognition in quiet and at +10 and +5 dB was assessed. Before implantation, scores were obtained in monaural and binaural conditions.
Spectral resolution has been reported to be closely related to vowel and consonant recognition in cochlear implant (CI) listeners. One measure of spectral resolution is spectral modulation threshold (SMT), which is defined as the smallest detectable spectral contrast in the spectral ripple stimulus. SMT may be determined by the activation pattern associated with electrical stimulation.
Most cochlear implant strategies utilize monopolar stimulation, likely inducing relatively broad activation of the auditory neurons. The spread of activity may be narrowed with a tripolar stimulation scheme, wherein compensating current of opposite polarity is simultaneously delivered to two adjacent electrodes. In this study, a model and cochlear implant subjects were used to examine loudness growth for varying amounts of tripolar compensation, parameterized by a coefficient sigma, ranging from 0 (monopolar) to 1 (full tripolar).
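The sigma parameterization reduces to a simple current split: each flanking electrode returns sigma/2 of the center current with opposite polarity. A minimal sketch (function and variable names are illustrative):

```python
def tripolar_currents(i_center, sigma):
    """Currents on (apical flank, center, basal flank) electrodes for
    partial-tripolar stimulation.

    sigma = 0 is monopolar (no flank current); sigma = 1 is full
    tripolar (all current returns through the flanks). The
    uncompensated remainder, (1 - sigma) * i_center, returns via the
    distant extracochlear ground, as in monopolar mode.
    """
    if not 0.0 <= sigma <= 1.0:
        raise ValueError("sigma must be in [0, 1]")
    flank = -sigma * i_center / 2.0
    return (flank, i_center, flank)
```

Because less net current reaches the distant ground as sigma grows, higher center-electrode current is generally needed to reach the same loudness at larger sigma.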
Objective: To determine, for patients who had identical levels of performance on a monosyllabic word test presented in quiet, whether device differences would affect performance when tested with other materials and in other test conditions.
Design: For Experiment 1, from a test population of 76 patients, three groups (N = 13 in each group) were created. Patients in the first group used the CII Bionic Ear behind-the-ear (BTE) speech processor, patients in the second group used the ESPrit 3G BTE speech processor, and patients in the third group used the Tempo+ BTE speech processor.
Objective: For patients with relatively good low-frequency hearing and relatively poor high-frequency hearing who met the pre-implant criteria for combined electric and acoustic stimulation (EAS), our aims were to (i) assess deficits in low-frequency auditory function, (ii) identify measures that might be sensitive to changes resulting from the insertion of an intracochlear electrode array, and (iii) quantify the relationship between measures of auditory function and performance on tasks of speech and melody recognition.
Design: Measures of frequency selectivity, temporal resolution, and nonlinear cochlear function, along with measures of word, sentence, consonant, vowel, and melody recognition, were obtained from 5 normal-hearing and 17 hearing-impaired listeners. The hearing-impaired listeners had auditory thresholds at 500 Hz, ranging from 20 to 60 dB HL, and thresholds at 1 kHz, ranging from 60 to 100 dB HL.
Objective: The aim of this study was to assess the effects of variations in the settings for minimum stimulation levels on speech understanding for adult cochlear implant recipients using the MED-EL Tempo+ speech processor.
Design: Fifteen patients served as listeners. The test material included sentences presented at a conversational level in noise (74 dB SPL at +10 dB signal-to-noise ratio), sentences presented at a soft level in a quiet background (54 dB SPL), consonants in "vCv" environment (74 dB SPL re: vowel peaks), and synthetic vowels in "bVt" environment (54 dB SPL re: vowel peaks).
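Presenting sentences at a fixed signal-to-noise ratio, as in the +10 dB condition above, reduces to scaling the noise relative to the speech by RMS level. A minimal sketch in pure Python (function names are illustrative):

```python
import math

def rms(samples):
    """Root-mean-square level of a sample sequence."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def scale_noise_for_snr(speech, noise, snr_db):
    """Return noise rescaled so that the speech-to-noise RMS ratio,
    in dB, equals snr_db when the two are mixed."""
    gain = rms(speech) / (rms(noise) * 10.0 ** (snr_db / 20.0))
    return [gain * s for s in noise]
```

Mixing is then sample-by-sample addition of the speech and the rescaled noise; the overall presentation level (e.g., 74 dB SPL) is set afterward by a common scale factor on the mixture.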
Objective: Our aim was to explore the consequences for speech understanding of leaving a gap in frequency between a region of acoustic hearing and a region stimulated electrically. Our studies were conducted with normal-hearing listeners, using an acoustic simulation of combined electric and acoustic (EAS) stimulation.
Design: Simulations of EAS were created by low-pass filtering speech at 0.
Arch Otolaryngol Head Neck Surg
May 2004
Objective: To determine if subjects who used different cochlear implant devices and who were matched on consonant-vowel-consonant (CNC) identification in quiet would show differences in performance on speech-based tests of spectral and temporal resolution, speech understanding in noise, or speech understanding at low sound levels.
Design: The performance of 15 subjects fit with the CII Bionic Ear System (CII Bionic Ear behind-the-ear speech processor with the Hi-Resolution sound processing strategy; Advanced Bionics Corporation) was compared with the performance of 15 subjects fit with the Nucleus 24 electrode array and ESPrit 3G behind-the-ear speech processor with the advanced combination encoder speech coding strategy (Cochlear Corporation).
Subjects: Thirty adults with late-onset deafness and above-average speech perception abilities who used cochlear implants.
Three factors account for the high level of speech understanding in quiet enjoyed by many patients fit with cochlear implants. First, some information about speech exists in the time/amplitude envelope of speech. This information is sufficient to narrow the number of word candidates for a given signal.
Objective: The aim of the present experiment was to assess the consequences of cochlear implantation at different ages on the development of the human central auditory system.
Design: Our measure of the maturity of central auditory pathways was the latency of the P1 cortical auditory evoked potential. Because P1 latencies vary as a function of chronological age, they can be used to infer the maturational status of auditory pathways in congenitally deafened children who regain hearing after being fit with a cochlear implant.