Publications by authors named "Sandeep Phatak"

Psychoacoustic stimulus presentation to the cochlear implant via direct audio input (DAI) is no longer possible for many newer sound processors (SPs). This study assessed the feasibility of placing circumaural headphones over the SP. Calibration spectra for loudspeaker, DAI, and headphone modalities were estimated by measuring cochlear-implant electrical output levels for tones presented to SPs on an acoustic manikin.
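
As a rough illustration of how such calibration offsets might be derived, the sketch below computes a per-frequency headphone correction from hypothetical electrical-output measurements; the frequencies and level arrays are placeholders, not data from the study.

    import numpy as np

    # Hypothetical electrical output levels (dB) measured from a sound
    # processor on the manikin for equal-level pure tones in two modalities.
    freqs_hz = np.array([250, 500, 1000, 2000, 4000, 8000])
    level_dai_db = np.array([60.0, 62.5, 63.0, 61.5, 58.0, 55.0])       # direct audio input
    level_headphone_db = np.array([57.0, 61.0, 64.5, 63.0, 55.5, 50.0])  # circumaural headphones

    # Per-frequency gain to apply to headphone-presented stimuli so they
    # produce the same electrical output as DAI presentation.
    correction_db = level_dai_db - level_headphone_db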

Objectives: The estimated prevalence of functional hearing and communication deficits (FHCDs), characterized by abnormally low speech recognition and binaural tone detection in noise or by an abnormally high degree of self-perceived hearing difficulty, increases dramatically in active-duty service members (SMs) who have hearing thresholds slightly above the normal range and who self-report having been close to an explosive blast. Knowing the exact nature of the underlying auditory-processing deficits that contribute to FHCDs would not only better characterize the effects of blast exposure on the human auditory system, but would also allow clinicians to prescribe appropriate therapies to treat or manage patient complaints.

Design: Two groups of SMs were initially recruited: (1) a control group (N = 78) with auditory thresholds ≤20 dB HL between 250 and 8000 Hz and no history of blast exposure, who passed a short FHCD screener, and (2) a group of blast-exposed SMs (N = 26) with normal to near-normal auditory thresholds between 250 and 4000 Hz, who failed the FHCD screener (cutoffs based on the study by Grant et al.).

Closed-set consonant identification, measured using nonsense syllables, has commonly been used to investigate the encoding of speech cues in the human auditory system. Such tasks also evaluate the robustness of speech cues to masking from background noise and their impact on auditory-visual speech integration. However, extending the results of these studies to everyday speech communication has been a major challenge because of differences in acoustic, phonological, lexical, contextual, and visual speech cues between consonants in isolated syllables and in conversational speech.

Hypothesis: Bilateral cochlear-implant (BI-CI) users will have a range of interaural insertion-depth mismatch because of differences in array placement or array characteristics. Mismatch will be larger for electrodes located near the apex or outside the scala tympani, or for arrays that mix precurved and straight types.

Background: Brainstem superior olivary-complex neurons are exquisitely sensitive to interaural-difference cues for sound localization.

Objectives: For listeners with one deaf ear and the other ear with normal/near-normal hearing (single-sided deafness [SSD]) or moderate hearing loss (asymmetric hearing loss), cochlear implants (CIs) can improve speech understanding in noise and sound-source localization. Previous SSD-CI localization studies have used a single source with artificial sounds such as clicks or random noise. While this approach provides insights regarding the auditory cues that facilitate localization, it does not capture the complex nature of localization behavior in real-world environments.

Objectives: Over the past decade, U.S. Department of Defense and Veterans Affairs audiologists have reported large numbers of relatively young adult patients who have normal to near-normal audiometric thresholds but who report difficulty understanding speech in noisy environments.

Effects of temporal distortions on consonant perception were measured using locally time-reversed nonsense syllables. Consonant recognition was measured in both audio and audio-visual modalities to assess whether the addition of visual speech cues can recover consonant errors caused by time reversal. The degradation in consonant recognition depended strongly on the manner of articulation, with sibilant fricatives, affricates, and nasals showing the least degradation.
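
Local time reversal itself is a simple waveform manipulation: the signal is cut into consecutive fixed-length segments and each segment is played backward. A minimal sketch (the segment duration and sampling rate below are illustrative; the study's exact values are not given here):

    import numpy as np

    def locally_time_reverse(signal, fs, segment_ms):
        """Reverse the waveform within consecutive fixed-length segments."""
        seg_len = max(1, int(round(fs * segment_ms / 1000.0)))
        out = np.empty_like(signal)
        for start in range(0, len(signal), seg_len):
            segment = signal[start:start + seg_len]
            out[start:start + seg_len] = segment[::-1]
        return out

    # Example: 60-ms local reversal of one second of noise at 16 kHz.
    distorted = locally_time_reverse(np.random.randn(16000), 16000, 60)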

Objective: The clinical evaluation of hearing loss using a pure-tone audiogram is not adequate to assess the functional hearing capabilities (or handicap) of a patient, especially speech-in-noise communication difficulties. The primary objective of this study was to measure the effect of elevated hearing thresholds on recognition performance in various functional speech-in-noise tests covering acoustic scenes of different complexities, and to identify the subset of tests that (a) are sensitive to individual differences in hearing thresholds and (b) provide information complementary to the audiogram. A secondary goal was to compare performance on this test battery with the self-assessed level of functional hearing abilities.

Objective: To evaluate the speech-in-noise performance of listeners with different levels of hearing loss in a variety of complex listening environments.

Design: The quick speech-in-noise (QuickSIN)-based test battery was used to measure the speech recognition performance of listeners with different levels of hearing loss. Subjective estimates of the speech reception thresholds (SRTs) corresponding to 100% and 0% speech intelligibility were obtained using a method of adjustment before objective measurement of the actual SRT corresponding to 50% speech intelligibility in every listening condition.
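
The 50%-intelligibility SRT can be read off a measured psychometric function; the sketch below does so by linear interpolation, assuming scores increase roughly monotonically with SNR (a generic recipe, not the study's exact scoring procedure):

    import numpy as np

    def srt_50(snrs_db, percent_correct):
        """Estimate the SNR (dB) at 50% intelligibility by linearly
        interpolating the measured score-vs-SNR function."""
        snrs = np.asarray(snrs_db, dtype=float)
        scores = np.asarray(percent_correct, dtype=float)
        order = np.argsort(scores)  # np.interp needs increasing x values
        return float(np.interp(50.0, scores[order], snrs[order]))

    # Example: scores of 10/35/55/80/95% at SNRs of -6/-3/0/3/6 dB.
    print(srt_50([-6, -3, 0, 3, 6], [10, 35, 55, 80, 95]))  # about -0.75 dB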

Since 1992, the Speech Recognition in Noise Test (SPRINT) has been the standard speech-in-noise test for assessing the auditory fitness for duty of US Army Soldiers with hearing loss. The original SPRINT consisted of 200 monosyllabic words presented at a signal-to-noise ratio (SNR) of +9 dB in the presence of six-talker babble noise. Normative data for the test were collected from 319 hearing-impaired Soldiers, and a procedure for making recommendations about the disposition of military personnel on the basis of their SPRINT score and years of experience was developed and implemented as part of US Army policy.

This study compared the modulation benefit for phoneme recognition obtained by normal-hearing (NH) and aided hearing-impaired (HI) listeners. Consonant and vowel recognition scores were measured using nonsense syllables in the presence of a steady-state noise and four vocoded speech maskers. The vocoded maskers were generated by modulating the steady-state noise, in either one or six frequency channels, with the speech envelope extracted either from a single talker or from four-talker babble.
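
A minimal sketch of one common recipe for such vocoded maskers, assuming Butterworth analysis bands and Hilbert envelopes (the study's exact filterbank and envelope parameters are not given here):

    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def vocoded_masker(speech, noise, fs, n_channels=6,
                       f_lo=100.0, f_hi=7000.0):
        """Modulate steady-state noise with the band-wise speech envelope.
        speech and noise must be equal-length 1-D arrays."""
        edges = np.geomspace(f_lo, f_hi, n_channels + 1)
        out = np.zeros_like(noise, dtype=float)
        for lo, hi in zip(edges[:-1], edges[1:]):
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            envelope = np.abs(hilbert(sosfiltfilt(sos, speech)))
            out += envelope * sosfiltfilt(sos, noise)
        return out / np.max(np.abs(out))  # normalize to unit peak

    # Example: one-channel (broadband) and six-channel maskers.
    fs = 16000
    speech = np.random.randn(fs)  # stand-in for a recorded talker
    noise = np.random.randn(fs)
    masker_1ch = vocoded_masker(speech, noise, fs, n_channels=1)
    masker_6ch = vocoded_masker(speech, noise, fs, n_channels=6)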

This study measured the influence of masker fluctuations on phoneme recognition. The first part of the study compared the benefit of masker modulations for consonant and vowel recognition in normal-hearing (NH) listeners. Recognition scores were measured in steady-state and sinusoidally amplitude-modulated noise maskers (100% modulation depth) at several modulation rates and signal-to-noise ratios.
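
Sinusoidally amplitude-modulated (SAM) noise is straightforward to generate; a minimal sketch with illustrative parameters (the study's modulation rates and SNRs are not listed here):

    import numpy as np

    def sam_noise(duration_s, fs, mod_rate_hz, mod_depth=1.0, rng=None):
        """Gaussian noise with sinusoidal amplitude modulation.
        mod_depth = 1.0 corresponds to 100% modulation depth."""
        rng = np.random.default_rng() if rng is None else rng
        n = int(round(duration_s * fs))
        t = np.arange(n) / fs
        carrier = rng.standard_normal(n)
        modulator = 1.0 + mod_depth * np.sin(2.0 * np.pi * mod_rate_hz * t)
        return carrier * modulator

    # Example: 1 s of noise modulated at 8 Hz, 100% depth, fs = 16 kHz.
    masker = sam_noise(1.0, 16000, 8.0)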

This paper presents a compact graphical method for comparing the performance of individual hearing impaired (HI) listeners with that of an average normal hearing (NH) listener on a consonant-by-consonant basis. This representation, named the consonant loss profile (CLP), characterizes the effect of a listener's hearing loss on each consonant over a range of performance. The CLP shows that the consonant loss, which is the signal-to-noise ratio (SNR) difference at equal NH and HI scores, is consonant-dependent and varies with the score.
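
In code, the consonant loss at a given score is the horizontal (SNR) distance between the HI and NH psychometric functions for that consonant; a minimal interpolation sketch, assuming each function was measured at several SNRs and is roughly monotonic:

    import numpy as np

    def consonant_loss_db(snrs_nh, scores_nh, snrs_hi, scores_hi, score_pct):
        """SNR difference (dB) at equal NH and HI recognition scores
        for one consonant."""
        def snr_at_score(snrs, scores):
            snrs = np.asarray(snrs, dtype=float)
            scores = np.asarray(scores, dtype=float)
            order = np.argsort(scores)  # np.interp needs increasing x
            return np.interp(score_pct, scores[order], snrs[order])
        return snr_at_score(snrs_hi, scores_hi) - snr_at_score(snrs_nh, scores_nh)

    # Sweeping score_pct over a range of scores traces the consonant
    # loss profile (CLP) for that consonant.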

The classic [MN55] confusion matrix experiment (16 consonants, white-noise masker) was repeated using computerized procedures similar to those of Phatak and Allen (2007) ["Consonant and vowel confusions in speech-weighted noise," J. Acoust. Soc. Am.].

This paper presents the results of a closed-set recognition task for 64 consonant-vowel sounds (16 C × 4 V, spoken by 18 talkers) in speech-weighted noise (at -22, -20, -16, -10, and -2 dB SNR) and in quiet. The confusion matrices were generated from the responses of a homogeneous set of ten listeners, and the confusions were analyzed using a graphical method. In speech-weighted noise the consonants separate into three sets: a low-scoring set C1 (/f/, /θ/, /v/, /ð/, /b/, /m/), a high-scoring set C2 (/t/, /s/, /z/, /ʃ/, /ʒ/), and a set C3 (/n/, /p/, /g/, /k/, /d/) with intermediate scores.
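
Building such a confusion matrix from trial-by-trial responses is mechanical; a minimal sketch using ASCII stand-ins for the 16 consonant labels (sh, zh, th, dh denote /ʃ/, /ʒ/, /θ/, /ð/):

    import numpy as np

    CONSONANTS = ["p", "t", "k", "b", "d", "g", "m", "n",
                  "f", "v", "s", "z", "sh", "zh", "th", "dh"]

    def confusion_matrix(trials):
        """Count matrix from (presented, responded) label pairs:
        rows index the presented consonant, columns the response."""
        index = {c: i for i, c in enumerate(CONSONANTS)}
        cm = np.zeros((len(CONSONANTS), len(CONSONANTS)), dtype=int)
        for presented, responded in trials:
            cm[index[presented], index[responded]] += 1
        return cm

    # Example: two correct /p/ responses and one /th/ -> /f/ confusion.
    cm = confusion_matrix([("p", "p"), ("p", "p"), ("th", "f")])
    row_probs = cm / np.maximum(cm.sum(axis=1, keepdims=True), 1)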
