Publications by authors named "Brimijoin W"

To optimally improve signal-to-noise ratio in noisy environments, a hearing assistance device must correctly identify what is signal and what is noise. Many of the biosignal-based approaches to this problem are themselves subject to noise, but head angle is an overt behavior that may be possible to capture in practical devices in the real world. Previous orientation studies have demonstrated that head angle is systematically related to listening target; our study aimed to examine whether this relationship is sufficiently reliable to be used in group conversations, where participants may be seated in different layouts and the listener is free to turn their body as well as their head.


Those experiencing hearing loss face severe challenges in perceiving speech in noisy situations such as a busy restaurant or cafe. Many factors contribute to this deficit, including decreased audibility, reduced frequency resolution, and a decline in temporal synchrony across the auditory system. Some hearing assistive devices implement beamforming, in which multiple microphones are used in combination to attenuate surrounding noise while the target speaker is left unattenuated.
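
As a rough illustration of the principle (not of the specific devices evaluated here), a two-microphone delay-and-sum beamformer can be sketched as follows; the microphone spacing, sampling rate, and steering angle are illustrative assumptions.

```python
import numpy as np

def delay_and_sum(mic_signals, mic_positions_m, steer_angle_deg, fs=16000, c=343.0):
    """Steer a simple delay-and-sum beamformer toward steer_angle_deg.

    mic_signals: (n_mics, n_samples) array of synchronised recordings.
    mic_positions_m: positions of the microphones along a line array, in metres.
    """
    theta = np.deg2rad(steer_angle_deg)
    n_mics, n_samples = mic_signals.shape
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    out = np.zeros(n_samples)
    for m in range(n_mics):
        # Delay that time-aligns a plane wave arriving from the target direction.
        tau = mic_positions_m[m] * np.sin(theta) / c
        spectrum = np.fft.rfft(mic_signals[m]) * np.exp(-2j * np.pi * freqs * tau)
        out += np.fft.irfft(spectrum, n=n_samples)
    return out / n_mics

# Toy example: two microphones 15 cm apart, target talker straight ahead (0 degrees).
rng = np.random.default_rng(0)
mics = rng.standard_normal((2, 16000))
enhanced = delay_and_sum(mics, np.array([0.0, 0.15]), steer_angle_deg=0.0)
```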


Linear comparisons can fail to describe perceptual differences between head-related transfer functions (HRTFs), reducing their utility for perceptual tests, HRTF selection methods, and prediction algorithms. This work introduces a machine learning framework for constructing a perceptual error metric that is aligned with performance in human sound localization. A neural network is first trained to predict measurement locations from a large database of HRTFs and then fine-tuned with perceptual data.
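
A minimal sketch of the first training stage described above (predicting measurement location from HRTFs) follows; the network architecture, input representation (concatenated log-magnitude spectra), and loss are illustrative assumptions, and the perceptual fine-tuning stage is not shown.

```python
import torch
import torch.nn as nn

# Hypothetical input: 256-bin log-magnitude HRTF per ear, concatenated -> (azimuth, elevation).
class LocationNet(nn.Module):
    def __init__(self, n_bins=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_bins, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 2),  # predicted (azimuth, elevation) in degrees
        )

    def forward(self, x):
        return self.net(x)

model = LocationNet()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy batch standing in for measurements drawn from a large HRTF database.
hrtfs = torch.randn(32, 512)
locations = torch.rand(32, 2) * 360.0 - 180.0
for _ in range(10):  # a few illustrative optimisation steps
    optimiser.zero_grad()
    loss = loss_fn(model(hrtfs), locations)
    loss.backward()
    optimiser.step()
```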


Speech intelligibility (SI) is known to be affected by the relative spatial position of target and interferers. The benefit of a spatial separation is, along with other factors, related to the head-related transfer function (HRTF). HRTFs differ between individuals, and thus the cues that affect SI might also differ.


Many conversations in our day-to-day lives are held in noisy environments, which impedes comprehension, and in groups, which taxes auditory attention-switching processes. These situations are particularly challenging for older adults in cognitive and sensory decline. In noisy environments, a variety of extra-linguistic strategies are available to speakers and listeners to facilitate communication, but while models of language account for the impact of context on word choice, there has been little consideration of the impact of context on extra-linguistic behaviour.

Article Synopsis
  • People have conversations even when there is a lot of noise around them, and this study looked at how they do that.
  • When the noise got louder, people talked louder and moved closer to each other, but this only helped a little bit.
  • As noise increased, conversations became shorter, and people looked more at each other's mouths, showing it was harder for them to understand each other.

By moving sounds around the head and asking listeners to report which ones moved more, it was found that sound sources at the side of a listener must move at least twice as much as ones in front to be judged as moving the same amount. A relative expansion of space in the front and compression at the side has consequences for spatial perception of moving sounds by both static and moving listeners. An accompanying prediction that the apparent location of static sound sources ought to also be distorted agrees with previous work and suggests that this is a general perceptual phenomenon that is not limited to moving signals.


Distance is important: From an ecological perspective, knowledge about the distance to either prey or predator is vital. However, the distance of an unknown sound source is particularly difficult to assess, especially in anechoic environments. In vision, changes in perspective resulting from observer motion produce a reliable, consistent, and unambiguous impression of depth known as motion parallax.
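
The geometry behind motion parallax as a distance cue can be made concrete: after a sideways step by the observer, a nearby source shifts in angle far more than a distant one. The step size and distances below are arbitrary.

```python
import numpy as np

def parallax_shift_deg(source_distance_m, lateral_step_m):
    """Angular shift of a source initially straight ahead after a sideways step."""
    return np.degrees(np.arctan2(lateral_step_m, source_distance_m))

for d in (0.5, 2.0, 10.0):
    print(f"source {d:4.1f} m away: {parallax_shift_deg(d, 0.2):5.1f} deg shift for a 0.2 m step")
```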


The manuscript proposes and evaluates a real-time algorithm for estimating eye gaze angle based solely on single-channel electrooculography (EOG), which can be obtained directly from the ear canal using conductive ear moulds. In contrast to conventional high-pass filtering, we used an algorithm that calculates absolute eye gaze angle via statistical analysis of detected saccades. The eye positions estimated by the new algorithm were still noisy.
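
The published algorithm is not reproduced here, but the general idea of recovering absolute gaze angle from the statistics of detected saccades, rather than from a high-pass-filtered baseline, might be sketched as follows; the sampling rate, thresholds, scaling factor, and re-centring assumption are all illustrative.

```python
import numpy as np

def gaze_from_eog(eog_uv, fs=250, saccade_thresh_uv_per_s=2000.0, uv_per_deg=15.0):
    """Accumulate EOG changes that occur during detected saccades (ignoring slow
    drift), then re-centre the track assuming gaze is straight ahead on average."""
    velocity = np.gradient(eog_uv) * fs                        # uV per second
    during_saccade = np.abs(velocity) > saccade_thresh_uv_per_s
    steps = np.where(during_saccade, np.diff(eog_uv, prepend=eog_uv[0]), 0.0)
    gaze_deg = np.cumsum(steps) / uv_per_deg
    return gaze_deg - np.median(gaze_deg)                      # statistical re-centring

# Toy signal: slow electrode drift plus two step-like saccades.
t = np.arange(0.0, 10.0, 1.0 / 250.0)
eog = 5.0 * t + 150.0 * (t > 3.0) - 150.0 * (t > 7.0)
gaze = gaze_from_eog(eog)
```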


The signal-to-noise ratio (SNR) benefit of hearing aid directional microphones is dependent on the angle of the listener relative to the target, something that can change drastically and dynamically in a typical group conversation. When a new target signal is significantly off-axis, directional microphones lead to slower target orientation, more complex movements, and more reversals. This raises the question of whether there is an optimal design for directional microphones.
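
The angle dependence of the benefit can be illustrated with a textbook first-order directivity pattern; a cardioid and a crude diffuse-noise reference are assumed purely for illustration and are not the processing evaluated in the study.

```python
import numpy as np

def cardioid_gain(angle_deg):
    """Amplitude gain of an idealised cardioid microphone re its on-axis response."""
    return 0.5 * (1.0 + np.cos(np.deg2rad(angle_deg)))

# Crude diffuse-noise reference: power gain averaged over all directions.
noise_db = 10.0 * np.log10(np.mean(cardioid_gain(np.arange(360)) ** 2))

for target_angle in (0, 30, 60, 90, 120, 180):
    target_db = 20.0 * np.log10(max(cardioid_gain(target_angle), 1e-6))
    print(f"target at {target_angle:3d} deg: SNR change ~ {target_db - noise_db:6.1f} dB")
```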


A key function of the brain is to provide a stable representation of an object's location in the world. In hearing, sound azimuth and elevation are encoded by neurons throughout the auditory system, and auditory cortex is necessary for sound localization. However, the coordinate frame in which neurons represent sound space remains undefined: classical spatial receptive fields in head-fixed subjects can be explained either by sensitivity to sound source location relative to the head (egocentric encoding) or relative to the world (allocentric encoding).
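
The two candidate reference frames are related by a single subtraction: the same physical source yields a fixed number in world (allocentric) coordinates but a head-orientation-dependent number in head-centred (egocentric) coordinates.

```python
def world_to_head_azimuth(source_world_az_deg, head_world_az_deg):
    """Allocentric (world-referenced) azimuth -> egocentric (head-referenced)
    azimuth, wrapped to the range [-180, 180)."""
    return (source_world_az_deg - head_world_az_deg + 180.0) % 360.0 - 180.0

# A source fixed at +30 deg in the world sweeps across egocentric space as the head turns.
for head_az in (0, 30, 60, 90):
    print(head_az, world_to_head_azimuth(30.0, head_az))
```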


When the observer moves, hearing is confronted by a problem similar to that faced by vision. The image motion that is created remains ambiguous until the observer knows the velocity of the eye and/or head. One way the visual system solves this problem is to use motor commands, proprioception, and vestibular information.


Background: There are two cues that listeners use to disambiguate the front/back location of a sound source: high-frequency spectral cues associated with the head and pinnae, and self-motion-related binaural cues. The use of these cues can be compromised in listeners with hearing impairment and users of hearing aids.

Purpose: To determine how age, hearing impairment, and the use of hearing aids affect a listener's ability to determine front from back based on both self-motion and spectral cues.


Effective use of exogenous human butyrylcholinesterase (BChE) as a bioscavenger for organophosphorus toxicants (OPs) is hindered by its limited availability and rapid clearance. Complexes made from recombinant human BChE (rhBChE) and copolymers may be useful in addressing these problems. We used in vitro approaches to compare enzyme activity, sensitivity to inhibition, stability, and bioscavenging capacity of free enzyme and copolymer-rhBChE complexes (C-BCs) based on one of nine different copolymers, formed from combinations of three molecular weights (MW) of poly-L-lysine (PLL; high MW, 30-70 kDa; medium MW, 15-30 kDa; low MW, 4-15 kDa) and three grafting ratios of poly(ethylene glycol) (PEG; 2:1, 10:1, 20:1).


Sound sources at the same angle in front of or behind a two-microphone array (e.g., bilateral hearing aids) produce the same time delay and therefore two candidate directions of arrival: a front-back confusion.
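
The ambiguity follows from the far-field geometry: the inter-microphone delay depends on sin(theta), and sin(theta) equals sin(180 deg - theta), so mirror-image front and back angles are indistinguishable from the delay alone. A minimal numerical illustration, assuming a 15 cm microphone spacing:

```python
import numpy as np

def tdoa_seconds(azimuth_deg, mic_spacing_m=0.15, c=343.0):
    """Far-field time-difference-of-arrival for a two-microphone array."""
    return mic_spacing_m * np.sin(np.deg2rad(azimuth_deg)) / c

for front, back in [(30, 150), (60, 120)]:
    print(front, back, tdoa_seconds(front), tdoa_seconds(back))  # identical delays
```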


Our heads rotate about three axes and move in three dimensions, constantly varying the spectral and binaural cues at the ear drums. In spite of this motion, static sound sources in the world are typically perceived as stable objects. This argues that the auditory system, in a manner not unlike the vestibulo-ocular reflex, works to compensate for self-motion and stabilize our sensory representation of the world.


Objectives: Although directional microphones on a hearing aid provide a signal-to-noise ratio benefit in a noisy background, the amount of benefit is dependent on how close the signal of interest is to the front of the user. It is assumed that when the signal of interest is off-axis, users can reorient themselves to the signal to make use of the directional microphones to improve signal-to-noise ratio. The present study tested this assumption by measuring the head-orienting behavior of bilaterally fit hearing-impaired individuals with their microphones set to omnidirectional and directional modes.


Background: When stimuli are presented over headphones, they are typically perceived as internalized; i.e., they appear to emanate from inside the head.


Listeners presented with noise were asked to press a key whenever they heard the vowels [a] or [i:]. The noise had a random spectrum, with levels in 60 frequency bins changing every 0.5 s.
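
A rough sketch of how such a stimulus could be generated; apart from the 60 bins and 0.5 s frame duration mentioned above, the frequency range, level range, and synthesis method are assumptions rather than the published parameters.

```python
import numpy as np

def random_spectrum_noise(duration_s=10.0, fs=16000, n_bins=60, frame_s=0.5,
                          level_range_db=20.0, f_lo=100.0, f_hi=8000.0, seed=0):
    """Noise whose level in each of n_bins frequency bands is redrawn every frame."""
    rng = np.random.default_rng(seed)
    edges = np.geomspace(f_lo, f_hi, n_bins + 1)
    n_frame = int(frame_s * fs)
    freqs = np.fft.rfftfreq(n_frame, d=1.0 / fs)
    frames = []
    for _ in range(int(duration_s / frame_s)):
        spectrum = np.zeros(len(freqs), dtype=complex)
        for lo, hi in zip(edges[:-1], edges[1:]):
            in_band = (freqs >= lo) & (freqs < hi)
            gain = 10.0 ** (rng.uniform(-level_range_db, 0.0) / 20.0)
            spectrum[in_band] = gain * np.exp(1j * rng.uniform(0, 2 * np.pi, in_band.sum()))
        frames.append(np.fft.irfft(spectrum, n=n_frame))
    return np.concatenate(frames)

stimulus = random_spectrum_noise()
```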


We used a dynamic auditory spatial illusion to investigate the role of self-motion and acoustics in shaping our spatial percept of the environment. Using motion capture, we smoothly moved a sound source around listeners as a function of their own head movements. A lowpass-filtered sound located behind a listener, but moved in the direction it would have moved had it been located in front, was perceived as a static source in front.
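
A geometric sketch of why such an illusion can arise: when the head rotates, the front-back-ambiguous binaural cue of a world-static front source moves opposite to that of a world-static rear source, and a rear source moved at twice the head velocity reproduces the front source's cue trajectory. The gain-of-two relation below is the idealised free-field case, not necessarily the exact mapping used in the study.

```python
import numpy as np

def binaural_cue(head_relative_az_deg):
    """Proxy for the front-back-ambiguous binaural cue (proportional to sin(azimuth))."""
    return np.sin(np.deg2rad(head_relative_az_deg))

head_yaw = np.linspace(-30, 30, 7)                            # head orientation in the world (deg)
front_static = binaural_cue(0.0 - head_yaw)                   # world-static source in front
rear_static = binaural_cue(180.0 - head_yaw)                  # world-static source behind
rear_moved = binaural_cue((180.0 + 2 * head_yaw) - head_yaw)  # rear source moved at 2x head velocity

print(np.allclose(rear_moved, front_static))    # True: its cue mimics a static front source
print(np.allclose(rear_static, -front_static))  # True: a static rear source's cue moves the other way
```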


It has long been understood that the level of a sound at the ear is dependent on head orientation, but the way in which listeners move their heads during listening has remained largely unstudied. Given the task of understanding a speech signal in the presence of a simultaneous noise, listeners could potentially use head orientation to either maximize the level of the signal in their better ear, or to maximize the signal-to-noise ratio in their better ear. To establish what head orientation strategy listeners use in a speech comprehension task, we used an infrared motion-tracking system to measure the head movements of 36 listeners with large (>16 dB) differences in hearing threshold between their left and right ears.
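
The two candidate strategies can be separated in a toy model; the sinusoidal head-shadow function and the source layout below are invented for illustration and are not the study's measured values.

```python
import numpy as np

def ear_level_db(source_az_re_head_deg, ear="right", shadow_db=6.0):
    """Crude head-shadow model: up to +/- shadow_db depending on laterality."""
    sign = 1.0 if ear == "right" else -1.0
    return sign * shadow_db * np.sin(np.deg2rad(source_az_re_head_deg))

signal_world_az, noise_world_az = 0.0, -90.0   # hypothetical layout: noise on the left
better_ear = "right"

head_angles = np.arange(-90, 91)               # candidate head orientations (deg)
signal_db = ear_level_db(signal_world_az - head_angles, better_ear)
noise_db = ear_level_db(noise_world_az - head_angles, better_ear)

print("orientation maximising signal level at the better ear:", head_angles[np.argmax(signal_db)])
print("orientation maximising SNR at the better ear:         ", head_angles[np.argmax(signal_db - noise_db)])
```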


Head movements are intimately involved in sound localization and may provide information that could aid an impaired auditory system. Using an infrared camera system, head position and orientation were measured for 17 normal-hearing and 14 hearing-impaired listeners seated at the center of a ring of loudspeakers. Listeners were asked to orient their heads as quickly as was comfortable toward a sequence of visual targets, or were blindfolded and asked to orient toward a sequence of loudspeakers playing a short sentence.


Linear measures of auditory receptive fields do not always fully account for a neuron's response to spectrotemporally complex signals such as frequency-modulated (FM) sweeps and communication sounds. A possible source of this discrepancy is cross-frequency interactions: common response properties that may be missed by linear receptive fields but captured using two-tone masking. Using a patterned tonal sequence that included a balanced set of all possible tone-to-tone transitions, we have combined the spectrotemporal receptive field with two-tone masking to measure spectrotemporal response maps (STRM).
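
One standard way to build a sequence containing every ordered tone-to-tone transition exactly once is a de Bruijn sequence of order 2 over the tone set; the eight-tone frequency set below is a placeholder, not the study's stimulus.

```python
import numpy as np

def de_bruijn(k, n=2):
    """FKM algorithm: a cyclic sequence over range(k) that contains every length-n
    subsequence (here, every ordered tone-to-tone transition) exactly once."""
    a = [0] * (k * n)
    seq = []
    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)
    db(1, 1)
    return seq

tone_freqs_khz = np.geomspace(20.0, 80.0, num=8)   # hypothetical 8-tone set
order = de_bruijn(len(tone_freqs_khz))
order.append(order[0])                             # close the cycle so all k*k transitions appear
sequence_khz = tone_freqs_khz[order]               # 8*8 transitions in a 65-tone sequence
```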


Two-tone stimuli have traditionally been used to reveal regions of inhibition in auditory spectral receptive fields, particularly for neurons with low spontaneous rates. These techniques reveal how different frequencies excite or suppress a cell's response to an excitatory frequency, but have often been assessed at a fixed masker-probe time interval. We used a variation of this methodology to determine whether two-tone spectrotemporal interactions can account for rate-dependent directional selectivity for frequency modulations (FM) in the mustached bat inferior colliculus (IC).


Mustached bats emit echolocation and communication calls containing both constant frequency (CF) and frequency-modulated (FM) components. Previously we found that 86% of neurons in the ventral division of the external nucleus of the inferior colliculus (ICXv) were directionally selective for linear FM sweeps and that selectivity was dependent on sweep rate. The ICXv projects to the suprageniculate nucleus (Sg) of the medial geniculate body.
