34 results match your criteria: "Starkey Hearing Research Center[Affiliation]"

One-Step, Three-Factor Passthought Authentication With Custom-Fit, In-Ear EEG.

Front Neurosci

April 2019

BioSENSE, School of Information, University of California, Berkeley, Berkeley, CA, United States.

In-ear EEG offers a promising path toward usable, discreet brain-computer interfaces (BCIs) for both healthy individuals and persons with disabilities. To test the promise of this modality, we built a brain-based authentication system using custom-fit EEG earpieces. In a sample of n = 7 participants, we demonstrated that our system achieves high accuracy, exceeding that of prior work using non-custom earpieces.

This systematic review investigated whether hearing aid use was associated with acute improvements in cognitive function in hearing-impaired adults. The review question and inclusion/exclusion criteria were designed using the Population, Intervention, Control, Outcomes, and Study design (PICOS) mnemonic. The review was pre-registered in the International Prospective Register of Systematic Reviews (PROSPERO) and performed in accordance with the statement on Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA).

While wide dynamic range compression (WDRC) is a standard feature of modern hearing aids, it can be difficult to fit compression settings to individual hearing aid users. The goal of the current study was to develop a practical test to learn the preference of individual listeners for different compression ratio (CR) settings in different listening conditions (speech-in-quiet and speech-in-noise). While it is possible to exhaustively test different CR settings, such methods can take many hours to complete, making them impractical.
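
This abstract does not describe the study's actual procedure, but the idea of learning listener preferences from a small number of comparisons rather than exhaustive testing can be illustrated with a hypothetical sketch: fitting a Bradley-Terry model to paired A/B judgments between compression-ratio (CR) settings. The settings, win counts, and model choice below are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Hypothetical paired-comparison data: wins[i, j] is how often
# setting i was preferred over setting j in A/B listening trials.
settings = ["CR 1:1", "CR 1.5:1", "CR 2:1", "CR 3:1"]
wins = np.array([[0, 3, 4, 5],
                 [2, 0, 3, 4],
                 [1, 2, 0, 3],
                 [0, 1, 2, 0]], dtype=float)

# Bradley-Terry preference strengths via simple MM updates:
# strength[i] = total wins of i / sum_j n_ij / (strength[i] + strength[j])
strength = np.ones(len(settings))
for _ in range(200):
    total = wins + wins.T  # comparisons conducted between each pair
    for i in range(len(settings)):
        denom = sum(total[i, j] / (strength[i] + strength[j])
                    for j in range(len(settings)) if j != i)
        strength[i] = wins[i].sum() / denom
    strength /= strength.sum()  # normalize to a preference distribution

best = settings[int(np.argmax(strength))]
print(best)  # the setting most consistently preferred
```

A handful of paired comparisons per pair suffices to rank the settings, which is the practical advantage over testing every setting exhaustively.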

Objectives: The objective of this work was to build a 15-item short-form of the Speech Spatial and Qualities of Hearing Scale (SSQ) that maintains the three-factor structure of the full form, using a data-driven approach consistent with internationally recognized procedures for short-form building. This included the validation of the new short-form on an independent sample and an in-depth, comparative analysis of all existing, full and short SSQ forms.

Design: Data from a previous study involving 98 normal-hearing (NH) individuals and 196 people with hearing impairments (HI) who did not wear hearing aids, along with results from several other published SSQ studies, were used for developing the short-form.

Tracking the dynamic representation of consonants from auditory periphery to cortex.

J Acoust Soc Am

October 2018

Auditory Neuroscience Laboratory, School of Medical Sciences, The University of Sydney, Sydney, New South Wales 2006, Australia.

In order to perceive meaningful speech, the auditory system must recognize different phonemes amidst a noisy and variable acoustic signal. To better understand the processing mechanisms underlying this ability, evoked cortical responses to different spoken consonants were measured with electroencephalography (EEG). Using multivariate pattern analysis (MVPA), binary classifiers attempted to discriminate between the EEG activity evoked by two given consonants at each peri-stimulus time sample, providing a dynamic measure of their cortical dissimilarity.
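
The time-resolved decoding analysis described above can be sketched with scikit-learn: one binary classifier per peri-stimulus time sample, with cross-validated accuracy tracing cortical dissimilarity over time. The data shapes, simulated signals, and choice of a logistic-regression classifier are assumptions for illustration, not details from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Simulated EEG epochs: trials x channels x time samples, two consonants.
rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 40, 32, 100
X_a = rng.normal(size=(n_trials, n_channels, n_times))
X_b = rng.normal(size=(n_trials, n_channels, n_times)) + 0.3  # shifted class mean

X = np.concatenate([X_a, X_b])                 # (80, 32, 100)
y = np.array([0] * n_trials + [1] * n_trials)  # consonant labels

# Train one binary classifier per time sample; the cross-validated
# accuracy at each sample is a dynamic measure of class dissimilarity.
accuracy = np.empty(n_times)
for t in range(n_times):
    clf = LogisticRegression(max_iter=1000)
    accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()

print(accuracy.shape)  # one decoding-accuracy value per time sample
```

Plotting `accuracy` against peri-stimulus time would give the decoding time course; accuracy above chance (0.5) indicates when the two consonants are neurally distinguishable.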

The perception of simple auditory mixtures is known to evolve over time. For instance, a common example of this is the "buildup" of stream segregation that is observed for sequences of tones alternating in pitch. Yet very little is known about how the perception of more complicated auditory scenes, such as multitalker mixtures, changes over time.

Using a same-different discrimination task, it has been shown that discrimination performance for sequences of complex tones varying just detectably in pitch is less dependent on sequence length (1, 2, or 4 elements) when the tones contain resolved harmonics than when they do not [Cousineau, Demany, and Pressnitzer (2009). J. Acoust.

Many hearing-aid wearers have difficulties understanding speech in reverberant noisy environments. This study evaluated the effects of reverberation and noise on speech recognition in normal-hearing listeners and hearing-impaired listeners wearing hearing aids. Sixteen typical acoustic scenes with different amounts of reverberation and various types of noise maskers were simulated using a loudspeaker array in an anechoic chamber.

To better understand issues of hearing-aid benefit during natural listening, this study examined the added demand placed by the goal of understanding speech over the more typically studied goal of simply recognizing speech sounds. The study compared hearing-aid benefit in two conditions, and examined factors that might account for the observed benefits. In the phonetic condition, listeners needed only identify the correct sound to make a correct response.

Over 360 million people worldwide suffer from disabling hearing loss. Most of them can be treated with hearing aids. Unfortunately, performance with hearing aids and the benefit obtained from using them vary widely across users.

Development and evaluation of a mixed gender, multi-talker matrix sentence test in Australian English.

Int J Audiol

February 2017

School of Medical Sciences and The Bosch Institute, University of Sydney, Sydney, New South Wales, Australia.

Objective: To develop, in Australian English, the first mixed-gender, multi-talker matrix sentence test.

Design: Speech material consisted of a 50-word base matrix whose elements can be combined to form sentences of identical syntax but unpredictable content. Ten voices (five female and five male) were recorded for editing and preliminary level equalization.

The Influence of Cochlear Mechanical Dysfunction, Temporal Processing Deficits, and Age on the Intelligibility of Audible Speech in Noise for Hearing-Impaired Listeners.

Trends Hear

September 2016

Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Spain; Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Spain; Departamento de Cirugía, Facultad de Medicina, Universidad de Salamanca, Spain.

The aim of this study was to assess the relative importance of cochlear mechanical dysfunction, temporal processing deficits, and age on the ability of hearing-impaired listeners to understand speech in noisy backgrounds. Sixty-eight listeners took part in the study. They were provided with linear, frequency-specific amplification to compensate for their audiometric losses, and intelligibility was assessed for speech-shaped noise (SSN) and a time-reversed two-talker masker (R2TM).

Successful speech perception in real-world environments requires that the auditory system segregate competing voices that overlap in frequency and time into separate streams. Vowels are major constituents of speech and are composed of frequencies (harmonics) that are integer multiples of a common fundamental frequency (F0). The pitch and identity of a vowel are determined by its F0 and spectral envelope (formant structure), respectively.
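
The harmonic structure described above can be sketched by synthesizing a vowel-like complex tone: partials at integer multiples of F0, weighted by a spectral envelope. The F0, formant frequency, and envelope shape below are illustrative assumptions, not stimulus parameters from the study.

```python
import numpy as np

fs = 16000          # sample rate (Hz)
f0 = 150            # fundamental frequency (Hz) -> determines pitch
t = np.arange(0, 0.2, 1 / fs)

signal = np.zeros_like(t)
for k in range(1, fs // (2 * f0)):   # all harmonics below the Nyquist limit
    freq = k * f0                    # harmonics are integer multiples of F0
    # crude spectral envelope: emphasize partials near a 600 Hz "formant"
    amp = 1.0 / (1.0 + ((freq - 600) / 400) ** 2)
    signal += amp * np.sin(2 * np.pi * freq * t)

signal /= np.max(np.abs(signal))     # normalize peak amplitude to 1
print(len(signal))                   # 0.2 s at 16 kHz -> 3200 samples
```

Shifting `f0` changes the perceived pitch while the envelope (and thus the vowel identity) stays fixed, which is the separation the abstract describes.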

Sensitivity to Auditory Velocity Contrast.

Sci Rep

June 2016

School of Medical Sciences, University of Sydney, NSW 2006, Australia.

A natural auditory scene often contains sound moving at varying velocities. Using a velocity contrast paradigm, we compared sensitivity to velocity changes between continuous and discontinuous trajectories. Subjects compared the velocities of two stimulus intervals that moved along a single trajectory, with and without a 1-s inter-stimulus interval (ISI).

The Perception of Auditory Motion.

Trends Hear

April 2016

School of Medical Sciences, University of Sydney, NSW, Australia.

The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion.

This study investigated whether spatial separation between talkers helps reduce cognitive processing load, and how hearing impairment interacts with the cognitive load of individuals listening in multi-talker environments. A dual-task paradigm was used in which performance on a secondary task (visual tracking) served as a measure of the cognitive load imposed by a speech recognition task. Visual tracking performance was measured under four conditions in which the target and the interferers were distinguished by (1) gender and spatial location, (2) gender only, (3) spatial location only, and (4) neither gender nor spatial location.

In the real world, listeners often need to track multiple simultaneous sources in order to maintain awareness of the relevant sounds in their environments. Thus, there is reason to believe that simple single-source sound localization tasks may not accurately capture the impact that a listening device such as a hearing aid might have on a listener's level of auditory awareness. In this experiment, 10 normal-hearing listeners and 20 hearing-impaired listeners were tested in three listening tasks of increasing complexity: a single-source localization task, where listeners identified and localized a single sound source presented in isolation; an added-source task, where listeners identified and localized a source that was added to an existing auditory scene; and a removed-source task, where listeners identified and localized a source that was removed from an existing auditory scene.

The ability to attend to a particular sound in a noisy environment is an essential aspect of hearing. To accomplish this feat, the auditory system must segregate sounds that overlap in frequency and time. Many natural sounds, such as human voices, consist of harmonics of a common fundamental frequency (F0).

This study compared the head-related transfer functions (HRTFs) recorded from the bare ear of a mannequin for 393 spatial locations and for five different hearing aid styles: Invisible-in-the-canal (IIC), completely-in-the-canal (CIC), in-the-canal (ITC), in-the-ear (ITE), and behind-the-ear (BTE). The spectral distortions of each style compared to the bare ear were described qualitatively in terms of the gain and frequency characteristics of the prominent spectral notch and two peaks in the HRTFs. Two quantitative measures of the differences between the HRTF sets and a measure of the dissimilarity of the HRTFs within each set were also computed.
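
The snippet does not state which quantitative difference measures were computed, but one common choice for comparing HRTF magnitude responses, shown here as a sketch, is the RMS log-spectral distortion between a bare-ear and an aided-ear measurement. The toy spectra below are assumptions for illustration.

```python
import numpy as np

def spectral_distortion_db(hrtf_ref, hrtf_test):
    """RMS difference (dB) between two HRTF magnitude responses
    sampled on the same frequency grid."""
    ref_db = 20 * np.log10(np.abs(hrtf_ref))
    test_db = 20 * np.log10(np.abs(hrtf_test))
    return np.sqrt(np.mean((ref_db - test_db) ** 2))

# Toy example: a flat bare-ear reference vs. a response with a
# 6 dB notch over a narrow band (mimicking a hearing-aid style's
# disruption of a prominent spectral feature).
freqs = np.linspace(200, 16000, 256)
ref = np.ones_like(freqs)
test = np.ones_like(freqs)
test[100:120] *= 10 ** (-6 / 20)   # 6 dB attenuation over 20 bins

print(round(spectral_distortion_db(ref, test), 2))  # -> 1.68
```

Averaging such a measure over all 393 spatial locations would give a single per-style distortion score, which is the kind of summary the study's quantitative comparisons require.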

The aim of this study was to investigate changes in central auditory processing following unilateral and bilateral hearing aid fitting using a combination of physiological and behavioral measures: late auditory event-related potentials (ERPs) and speech recognition in noise, respectively. The hypothesis was that for fitted ears, the ERP amplitude would increase over time following hearing aid fitting in parallel with improvement in aided speech recognition. The N1 and P2 ERPs were recorded to 500 and 3000 Hz tones presented at 65, 75, and 85 dB sound pressure level to either the left or right ear.

There exist perceptible differences between sound emanating from a talker who faces and a talker who does not face a listener: Sound from a non-facing talker is attenuated and acquires a spectral tilt. The present study assessed the role that these facing-orientation cues play for speech perception. Digit identification for a frontal target talker in the presence of two spatially separated interfering talkers was measured for 10 normal-hearing (NH) and 11 hearing-impaired (HI) listeners.

Auditory and cognitive influences on speech perception in a complex situation were investigated in listeners with normal hearing (NH) and hearing loss (HL). The speech corpus used was the Nonsense-Syllable Response Measure [NSRM; Woods and Kalluri (2010). International Hearing Aid Research Conference, pp.

Aided consonant and vowel identification was measured in 13 listeners with high-frequency sloping hearing losses. To investigate the influence of compression-channel analysis bandwidth on identification performance independent of the number of channels, performance was compared for three 17-channel compression systems that differed only in terms of their channel bandwidths. One compressor had narrow channels, one had widely overlapping channels, and the third had level-dependent channels.

Purpose: To summarize existing data on the interactions of cognitive function and hearing technology in older adults.

Method: A narrative review was used to summarize previous data for the short-term interactions of cognition and hearing technology on measured outcomes. For long-term outcomes, typically for 3-24 months of hearing aid use, a computerized database search was conducted.

When normal-hearing (NH) listeners compare the loudness of narrowband and wideband sounds presented at identical sound pressure levels, the wideband sound will most often be perceived as louder than the narrowband sound, a phenomenon referred to as loudness summation. Hearing-impaired (HI) listeners typically show less-than-normal loudness summation, due to reduced cochlear compressive gain and degraded frequency selectivity. In the present study, loudness summation at 1 and 3 kHz was estimated monaurally for five NH and eight HI listeners by matching the loudness of narrowband and wideband noise stimuli.
