Purpose: Children with typical hearing and various language and cognitive challenges can struggle with processing speech in background noise. Thus, children with a language disorder (LD) are at risk for difficulty with speech recognition in poorer acoustic environments.
Method: The current study compared the effects of background speech-shaped noise (SSN) with and without reverberation on sentence recognition for children with LD (n = 9) and typical language development (TLD; n = 9).
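The two masker conditions described above can be sketched in a few lines of Python: white noise is spectrally shaped to match the long-term spectrum of the speech material, and the reverberant condition is obtained by convolving that noise with a room impulse response. This is a generic illustration, not the study's stimulus-generation code; the file names, filter order, and use of soundfile/scipy are assumptions.

```python
# A minimal sketch, assuming mono WAV inputs; file names and the 2001-tap
# filter are illustrative choices, not details from the study.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve, firwin2, welch

speech, fs = sf.read("concatenated_sentences.wav")   # hypothetical speech material
rir, _ = sf.read("classroom_rir.wav")                # hypothetical room impulse response

# Long-term average spectrum of the speech material (Welch periodogram).
freqs, psd = welch(speech, fs=fs, nperseg=4096)

# FIR filter whose magnitude response follows the speech spectrum; filtering
# white noise with it yields speech-shaped noise (SSN).
fir = firwin2(2001, freqs, np.sqrt(psd / psd.max()), fs=fs)
ssn = fftconvolve(np.random.default_rng(0).standard_normal(len(speech)), fir, mode="same")

# Reverberant SSN for the second masker condition: convolve with the RIR.
ssn_reverb = fftconvolve(ssn, rir)[: len(ssn)]
```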
Introduction: We currently lack speech testing materials faithful to broader aspects of real-world auditory scenes that have demonstrable effects on speech perception, such as speech directivity and extended high-frequency (EHF; > 8 kHz) content. Here, we describe the development of a multidirectional, high-fidelity speech corpus using multichannel anechoic recordings that can be used for future studies of speech perception in complex environments by diverse listeners.
Design: Fifteen male and 15 female talkers (21.
Gender and language effects on the long-term average speech spectrum (LTASS) have been reported, but typically using recordings that were bandlimited and/or failed to accurately capture extended high frequencies (EHFs). Accurate characterization of the full-band LTASS is warranted given recent data on the contribution of EHFs to speech perception. The present study characterized the LTASS for high-fidelity, anechoic recordings of males and females producing Bamford-Kowal-Bench sentences, digits, and unscripted narratives.
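As a rough illustration of what a full-band LTASS analysis involves, the sketch below averages the spectrum of a hypothetical anechoic recording and reports octave-band levels up to 16 kHz, so that the EHF region above 8 kHz is retained rather than discarded by bandlimiting. It is not the authors' analysis code; the file name and the octave-band summary are assumptions.

```python
# A rough sketch, assuming a mono anechoic recording sampled at 44.1 kHz or higher.
import numpy as np
import soundfile as sf
from scipy.signal import welch

x, fs = sf.read("talker01_bkb_sentences.wav")   # hypothetical recording
f, psd = welch(x, fs=fs, nperseg=8192)          # long-term average power spectrum

# Octave-band levels from 125 Hz to 16 kHz; the bands above 8 kHz cover the
# extended high-frequency (EHF) region that bandlimited recordings discard.
df = f[1] - f[0]
for fc in 125 * 2.0 ** np.arange(8):            # 125, 250, ..., 16000 Hz
    band = (f >= fc / np.sqrt(2)) & (f < fc * np.sqrt(2))
    level_db = 10 * np.log10(psd[band].sum() * df)
    print(f"{fc:7.0f} Hz octave band: {level_db:6.1f} dB re full scale")
```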
Spectral weighting functions for sound localization were measured in participants with bilateral mild sloping to moderately severe, high-frequency sensorineural hearing loss (SNHL) and compared to normal-hearing (NH) participants with and without simulated SNHL. Each participant group localized three types of complex tones, each composed of seven spatially jittered frequency components presented from the horizontal frontal field. A threshold-elevating noise masker was implemented in the free field to simulate SNHL for participants with NH.
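For readers unfamiliar with the paradigm, spectral weights in studies of this kind are commonly estimated by regressing each trial's localization response onto the independently jittered azimuths of the individual frequency components. The sketch below runs that analysis on simulated data; the trial count, jitter range, and simulated listener are illustrative assumptions, not values from the study.

```python
# Hedged sketch of regression-based spectral weight estimation on simulated data.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_components = 200, 7

# Per-component azimuths: a shared target azimuth plus independent jitter.
target = rng.uniform(-60, 60, size=(n_trials, 1))
component_az = target + rng.uniform(-10, 10, size=(n_trials, n_components))

# Simulated listener who relies mostly on the lowest component, plus response noise.
true_w = np.array([0.6, 0.1, 0.1, 0.05, 0.05, 0.05, 0.05])
responses = component_az @ true_w + rng.normal(0, 5, n_trials)

# Least-squares weights (with an intercept term), normalized to sum to 1.
X = np.column_stack([component_az, np.ones(n_trials)])
coefs, *_ = np.linalg.lstsq(X, responses, rcond=None)
weights = coefs[:n_components] / coefs[:n_components].sum()
print(np.round(weights, 2))
```

The printed weights recover the simulated listener's reliance on the lowest component; with real data, the same regression logic yields the spectral weighting function.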
Purpose: The interaural time difference (ITD) is a primary horizontal-plane sound localization cue computed in the auditory brainstem. ITDs are accessible in the temporal fine structure of pure tones with frequencies no higher than about 1400 Hz. Why listeners' ITD sensitivity declines from its best near 700 Hz to undetectable within one octave currently lacks a fully compelling physiological explanation.
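The article notes that no fully compelling physiological explanation exists; as a small worked example (not taken from the article), the snippet below merely illustrates one frequently discussed constraint, phase ambiguity: once the natural ITD range of roughly ±700 μs exceeds half the period of the tone, the interaural phase difference wraps and the fine-structure cue is no longer unique.

```python
# Illustrative only: half-period vs. the approximate largest natural human ITD.
max_itd_us = 700.0
for freq in (500, 700, 1000, 1400, 2000):
    period_us = 1e6 / freq
    print(f"{freq:5d} Hz: period = {period_us:6.0f} μs, "
          f"half-period = {period_us / 2:5.0f} μs, "
          f"unambiguous for all natural ITDs: {max_itd_us <= period_us / 2}")
```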
Objectives: Audiometric testing typically does not include frequencies above 8 kHz. However, recent research suggests that extended high-frequency (EHF) sensitivity could affect hearing in natural communication environments. Clinical assessment of hearing often employs pure tones and frequency-modulated (FM) tones interchangeably regardless of frequency.
Spectral weighting of sound localization cues was measured in the presence of three levels of competing noise presented in the free field. Target stimuli were complex tones containing seven tonal components, presented from an ∼120° range of frontal azimuths. Competitors were two independent Gaussian noises presented from 90° left and right azimuth at one of three levels, yielding signal-to-noise ratios of +9, 0, and -6 dB.
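Setting the three signal-to-noise ratios named above amounts to scaling the competitor relative to a fixed-level target. A minimal sketch, assuming an RMS-based level definition and placeholder signals (nothing below comes from the study):

```python
import numpy as np

def rms(x: np.ndarray) -> float:
    return float(np.sqrt(np.mean(x ** 2)))

def scale_to_snr(target: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so that 20*log10(rms(target) / rms(noise)) equals `snr_db`."""
    return noise * (rms(target) / rms(noise)) / 10 ** (snr_db / 20)

rng = np.random.default_rng(1)
target = rng.standard_normal(48000)   # placeholder for the complex-tone target
noise = rng.standard_normal(48000)    # placeholder for one Gaussian competitor

for snr_db in (+9, 0, -6):            # the three competitor levels used above
    scaled = scale_to_snr(target, noise, snr_db)
    print(f"target SNR {snr_db:+d} dB -> achieved {20 * np.log10(rms(target) / rms(scaled)):+.1f} dB")
```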
Background: Functional near-infrared spectroscopy (fNIRS) is a viable non-invasive technique for functional neuroimaging in the cochlear implant (CI) population; however, the effects of acoustic stimulus features on the fNIRS signal have not been thoroughly examined. This study examined the effect of stimulus level on fNIRS responses in adults with normal hearing or bilateral CIs. We hypothesized that fNIRS responses would correlate with both stimulus level and subjective loudness ratings, but that the correlation would be weaker with CIs due to the compression of acoustic input to electric output.
Purpose: Difficulty understanding speech in noise is a common communication problem. Clinical tests of speech in noise differ considerably from real-world listening and offer patients limited intrinsic motivation to perform well. In order to design a test that captures motivational aspects of real-world communication, this study investigated effects of gamification, or the inclusion of game elements, on a laboratory spatial release from masking test.
Background: Remote-microphone (RM) systems are designed to reduce the impact of poor acoustics on speech understanding. However, there is limited research examining the effects of adding reverberation to noise on speech understanding when using hearing aids (HAs) and RM systems. Given the significant challenges posed by environments with poor acoustics for children who are hard of hearing, we evaluated the ability of a novel RM system to address the effects of noise and reverberation.
Perceptual weighting of sound localization cues across spectral components was measured over headphones [experiment (expt.) 1] and in the free field (expt. 2) and quantified in the form of spectral weighting functions (SWFs).
Acoustics research involving human participants typically takes place in specialized laboratory settings. Listening studies, for example, may present controlled sounds using calibrated transducers in sound-attenuating or anechoic chambers. In contrast, remote testing takes place outside of the laboratory in everyday settings (e.
Unequal weighting of binaural information across frequency can reduce sensitivity in the presence of competing but uninformative cues ("binaural interference"), a potentially serious problem for listeners who use combined electric and acoustic (EAS) hearing. Here, we used virtual-reality techniques to measure spectral weighting functions (SWF) during localization of simulated EAS stimuli [see van Ginkel et al., 2019, JASA 145, 2445-52]: low-frequency "acoustic" noise bands and high-frequency "electric" click trains.
Active exploration changes how the brain processes auditory space. A new study reveals the dynamic properties of neural activity that encode combinations of sound identity and location during active sensing.
Purpose: Electric and acoustic stimulation (EAS) with preserved hearing in the implanted ear provides benefit for speech understanding, spatial hearing, and quality of life in adults. However, there is limited research on EAS outcomes in children. The aims of this study were to estimate the magnitude of EAS-related benefit on speech understanding in children with preserved acoustic hearing and to determine what role acoustic interaural time difference (ITD) sensitivity may have on said EAS benefit.
A classic paradigm used to quantify the perceptual weighting of binaural spatial cues requires a listener to adjust the value of one cue, while the complementary cue is held constant. Adjustments are made until the auditory percept appears centered in the head, and the values of both cues are recorded as a trading relation (TR), most commonly in μs interaural time difference per dB interaural level difference. Interestingly, existing literature has shown that TRs differ according to the cue being adjusted.
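A toy example of how a trading relation is computed from such adjustments (all numbers below are hypothetical, not data from the study): the listener re-centers the image by adjusting one cue against a fixed offset in the other, and the TR is the ratio of the adjusted cue to the fixed cue.

```python
import numpy as np

# Hypothetical ITD-adjustment trials: ILD fixed at +4 dB (right ear), listener's
# centering ITD settings in microseconds (negative = left-leading).
fixed_ild_db = 4.0
itd_settings_us = np.array([-160.0, -180.0, -150.0, -170.0])
tr_itd_adjusted = np.abs(itd_settings_us.mean()) / fixed_ild_db
print(f"TR (ITD adjusted): {tr_itd_adjusted:.1f} μs/dB")

# Hypothetical ILD-adjustment trials: ITD fixed at +300 μs (right-leading),
# listener's centering ILD settings in dB (negative = favoring the left ear).
fixed_itd_us = 300.0
ild_settings_db = np.array([-9.0, -11.0, -10.5, -9.5])
tr_ild_adjusted = fixed_itd_us / np.abs(ild_settings_db.mean())
print(f"TR (ILD adjusted): {tr_ild_adjusted:.1f} μs/dB")
```

With these made-up settings the two procedures give roughly 41 versus 30 μs/dB, mirroring the observation above that TRs can differ depending on which cue is adjusted.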
Bilateral acoustic hearing in cochlear implant (CI) recipients with hearing preservation may allow access to binaural cues. Sensitivity to acoustic binaural cues has been shown in some listeners combining electric and acoustic stimulation (EAS), yet remains poorly understood and may be subject to limitations imposed by the electrical stimulation and/or amplification asymmetries. The purpose of this study was to investigate the effect of stimulus level, frequency-dependent gain, and the addition of unilateral electrical stimulation on sensitivity to low-frequency binaural cues.
Preserved low-frequency acoustic hearing in cochlear implant (CI) recipients affords combined electric-acoustic stimulation (EAS) that could improve access to low-frequency acoustic binaural cues and enhance spatial hearing. Such benefits, however, could be undermined by interactions between electrical and acoustical inputs to adjacent (spectral overlap) or distant (binaural interference) cochlear places in EAS. This study simulated EAS in normal-hearing listeners, measuring interaural time difference (ITD) and interaural level difference (ILD) discrimination thresholds for a low-frequency noise (simulated acoustic target) in the presence or absence of a pulsatile high-frequency complex presented monotically or diotically (simulated unilateral or bilateral electric distractor).
Sound onsets dominate spatial judgments of many types of periodic sound. Conversely, ongoing cues often dominate in spatial judgments of aperiodic noise. This study quantified onset dominance as a function of both the bandwidth and the temporal regularity of stimuli by measuring temporal weighting functions (TWF) from Stecker, Ostreicher, and Brown [(2013) J.
Temporal variation in sensitivity to sound-localization cues was measured in anechoic conditions and in simulated reverberation using the temporal weighting function (TWF) paradigm [Stecker and Hafter (2002). J. Acoust.
Simultaneity judgments were used to measure temporal binding windows (TBW) for brief binaural events (changes in interaural time and/or level differences [ITD and ILD]) and to test the hypothesis that ITD and ILD contribute to perception via separate sensory dimensions subject to binding via slow (100+ ms), presumably cortical, mechanisms as in multisensory TBW. Stimuli were continuous low-frequency noises that included two brief shifts of either type (ITD or ILD), both of which are heard as lateral position changes. TBW for judgments within a single cue dimension were narrower for ITD (mean = 444 ms) than ILD (807 ms).
Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization).
Human listeners place greater weight on the beginning of a sound compared to the middle or end when determining sound location, creating an auditory illusion known as the Franssen effect. Here, we exploited that effect to test whether human auditory cortex (AC) represents the physical vs. perceived spatial features of a sound.