This study examined speech intelligibility and preferences for omnidirectional and directional microphone hearing aid processing across a range of signal-to-noise ratios (SNRs). A primary motivation for the study was to determine whether SNR might be used to represent distance between talker and listener in automatic directionality algorithms based on scene analysis. Participants were current hearing aid users who either had experience with omnidirectional microphone hearing aids only or with manually switchable omnidirectional/directional hearing aids. Using IEEE/Harvard sentences from a front loudspeaker and speech-shaped noise from three loudspeakers located behind and to the sides of the listener, the directional advantage (DA) was obtained at 11 SNRs ranging from -15 dB to +15 dB in 3 dB steps. Preferences for the two microphone modes at each of the 11 SNRs were also obtained using concatenated IEEE sentences presented in the speech-shaped noise. Results revealed that a DA was observed across a broad range of SNRs, although directional processing provided the greatest benefit within a narrower range of SNRs. Mean data suggested that microphone preferences were determined largely by the DA, such that the greater the benefit to speech intelligibility provided by the directional microphones, the more likely the listeners were to prefer that processing mode. However, inspection of the individual data revealed that highly predictive relationships did not exist for most individual participants. Few preferences for omnidirectional processing were observed. Overall, the results did not support the use of SNR to estimate the effects of distance between talker and listener in automatic directionality algorithms.
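The directional advantage (DA) described above is simply the difference between directional-mode and omnidirectional-mode intelligibility scores at each SNR. A minimal sketch of that computation, using hypothetical percent-correct scores (not the study's data):

```python
# Hypothetical percent-correct scores at each SNR; NOT the study's data.
snrs = list(range(-15, 16, 3))  # -15 to +15 dB in 3 dB steps (11 SNRs)

omni = [0, 2, 8, 20, 38, 55, 72, 85, 93, 97, 99]          # omnidirectional mode
directional = [1, 6, 18, 38, 60, 78, 89, 95, 98, 99, 99]  # directional mode

# Directional advantage (DA): directional minus omnidirectional score per SNR.
da = [d - o for d, o in zip(directional, omni)]

# The SNR where the DA peaks marks the narrower range of greatest benefit.
peak_snr = snrs[da.index(max(da))]
print(dict(zip(snrs, da)))
print("Peak DA at", peak_snr, "dB SNR")
```

With scores like these, the DA is positive across the whole sweep but concentrated near the middle SNRs, mirroring the pattern the abstract reports.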
DOI: http://dx.doi.org/10.3766/jaaa.16.9.4
J Acoust Soc Am, January 2025. Acoustic Technology, Department of Electrical & Photonics Engineering, Technical University of Denmark, Kongens Lyngby, Denmark.
Characterising acoustic fields in rooms is challenging due to the complexity of data acquisition. Sound field reconstruction methods aim at predicting the acoustic quantities at positions where no data are available, incorporating generalisable physical priors of the sound in a room. This study introduces a model that exploits the general time structure of the room impulse response, where a wave-based expansion addresses the direct sound and early reflections, localising their apparent origin, and kernel methods are applied to the late part.
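The wave-based expansion mentioned for the direct sound and early reflections can be illustrated as a regularised least-squares fit of plane waves to pressure measurements, then evaluated at an unmeasured position. A minimal single-frequency sketch with synthetic data and an assumed geometry (not the paper's model, which also applies kernel methods to the late part):

```python
import numpy as np

# Single-frequency plane-wave expansion sketch (synthetic, illustrative only).
rng = np.random.default_rng(0)
k = 2 * np.pi * 1000 / 343.0  # wavenumber at 1 kHz, c = 343 m/s

# Microphone positions (m) and candidate plane-wave directions (unit vectors).
mics = rng.uniform(-0.5, 0.5, size=(24, 3))
az = np.linspace(0, 2 * np.pi, 36, endpoint=False)
dirs = np.stack([np.cos(az), np.sin(az), np.zeros_like(az)], axis=1)

# Dictionary: each column is one plane wave exp(-j k r.d) sampled at the mics.
H = np.exp(-1j * k * mics @ dirs.T)

# Synthetic "measured" field: two plane waves plus a little noise.
true_w = np.zeros(36, complex)
true_w[4], true_w[20] = 1.0, 0.5
p = H @ true_w + 0.01 * rng.standard_normal(24)

# Tikhonov-regularised least-squares estimate of the expansion coefficients.
w = np.linalg.solve(H.conj().T @ H + 1e-2 * np.eye(36), H.conj().T @ p)

# Reconstruct the field at a position where no data were acquired.
r_new = np.array([[0.2, -0.1, 0.0]])
p_hat = (np.exp(-1j * k * r_new @ dirs.T) @ w)[0]
```

The fitted coefficients both interpolate the field and localise the apparent origin of the strongest components (the largest entries of `w`), which is the role the abstract assigns to the wave-based part of the model.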
Sensors (Basel), November 2024. Department of Computer and Electrical Engineering, Mid Sweden University, 851 70 Sundsvall, Sweden.
Traditional spherical sector microphone arrays using omnidirectional microphones face limitations in modal strength and spatial resolution, especially within spherical sector configurations. This study aims to enhance array performance by developing a spherical sector array employing first-order cardioid microphones. A model based on spherical sector harmonic (SSH) functions is introduced to extend the benefits of spherical harmonics to sector arrays.
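The gain from swapping omnidirectional capsules for first-order cardioids can be seen in the directivity pattern itself. A brief illustration of the ideal first-order cardioid response versus an omnidirectional one (textbook formulas, not the paper's SSH formulation):

```python
import numpy as np

# Ideal first-order cardioid directivity D(theta) = 0.5 * (1 + cos(theta)),
# versus an omnidirectional capsule D(theta) = 1 (illustrative only).
theta = np.linspace(0, np.pi, 181)
cardioid = 0.5 * (1 + np.cos(theta))

front = cardioid[0]    # on-axis response, theta = 0
back = cardioid[-1]    # rear response, theta = pi (perfect null in the ideal model)
side = cardioid[90]    # theta = pi/2
print(front, side, back)
```

The cardioid keeps full on-axis sensitivity while rejecting the rear half-space, which is what strengthens the higher-order modes that a purely omnidirectional sector array resolves poorly.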
J Acoust Soc Am, December 2024. Department of Computer Science, Acoustics Lab, Aalto University, P.O. Box 15400, FI-00076 Aalto, Finland.
J Speech Lang Hear Res, January 2025. Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE.
Introduction: We currently lack speech testing materials faithful to broader aspects of real-world auditory scenes such as speech directivity and extended high frequency (EHF; > 8 kHz) content that have demonstrable effects on speech perception. Here, we describe the development of a multidirectional, high-fidelity speech corpus using multichannel anechoic recordings that can be used for future studies of speech perception in complex environments by diverse listeners.
Design: Fifteen male and 15 female talkers (21.…
Ear Hear, November 2024. Department of Communication Sciences & Disorders, Northwestern University, Evanston, Illinois, USA.
Objectives: Previous research has shown that speech recognition with different wide dynamic range compression (WDRC) time-constants (fast-acting, "Fast," versus slow-acting, "Slow") is associated with individual working memory ability, especially in adverse listening conditions. Until recently, much of this research was limited to omnidirectional hearing aid settings and colocated speech and noise, whereas most hearing aids are fit with directional processing that may improve the listening environment in spatially separated conditions and interact with WDRC processing. The primary objective of this study was to determine whether there is an association between individual working memory ability and speech recognition in noise with different WDRC time-constants, with and without microphone directionality (binaural beamformer, "Beam," versus omnidirectional, "Omni"), in a spatial condition ideal for the beamformer (speech at 0°, noise at 180°).
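The Fast/Slow distinction above comes down to the attack and release time-constants of the compressor's level estimator. A minimal one-pole attack/release envelope follower sketch, with illustrative time-constants rather than any specific hearing-aid implementation:

```python
import numpy as np

# One-pole attack/release envelope follower: the basic mechanism behind
# "fast" vs "slow" WDRC time-constants (illustrative values only).
def envelope(x, fs, attack_ms, release_ms):
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = np.zeros_like(x)
    level = 0.0
    for i, v in enumerate(np.abs(x)):
        a = a_att if v > level else a_rel   # rising input -> attack coefficient
        level = a * level + (1 - a) * v
        env[i] = level
    return env

fs = 16000
x = np.concatenate([np.ones(1600), np.zeros(1600)])  # 100 ms on, 100 ms off
fast = envelope(x, fs, attack_ms=5, release_ms=50)
slow = envelope(x, fs, attack_ms=20, release_ms=500)
```

A fast compressor tracks the rapid level changes between speech segments (changing the gain moment to moment), while a slow one holds its gain across them; that difference in the delivered signal is one proposed locus of the interaction with working memory.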