47 results match your criteria: "Centre for the Neural Basis of Hearing[Affiliation]"

The presence of 'giant' synapses in the auditory brainstem is thought to be a specialisation for encoding temporal information that supports the perception of pitch, frequency, and sound-source localisation. These 'giant' synapses have been found in the ventral cochlear nucleus, the medial nucleus of the trapezoid body and the ventral nucleus of the lateral lemniscus. The interpretation of these synapses as simple relays has, however, been challenged by the observation in the gerbil that the action potential frequently fails in the ventral cochlear nucleus.

Dual Coding of Frequency Modulation in the Ventral Cochlear Nucleus.

J Neurosci

April 2018

Centre for the Neural Basis of Hearing, The Physiological Laboratory, Department of Physiology, Development and Neuroscience, University of Cambridge, United Kingdom.

Frequency modulation (FM) is a common acoustic feature of natural sounds and is known to play a role in robust sound source recognition. Auditory neurons show precise stimulus-synchronized discharge patterns that may be used for the representation of low-rate FM. However, it remains unclear whether this representation is based on synchronization to slow temporal envelope (ENV) cues resulting from cochlear filtering or phase locking to faster temporal fine structure (TFS) cues.
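
To make the ENV/TFS distinction concrete, the sketch below (not code from the study; the filter design, modulation parameters, and sample rate are illustrative assumptions) shows how a cochlear-like band-pass channel converts a constant-amplitude, low-rate FM tone into a slow Hilbert-envelope (ENV) fluctuation, with the temporal fine structure (TFS) carried by the fast oscillation under that envelope.

    # Illustrative sketch only: ENV/TFS decomposition at the output of one
    # cochlear-like band-pass channel. All parameter values are assumptions.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    fs = 16000
    t = np.arange(0, 1.0, 1 / fs)
    f_c, f_mod, f_dev = 1000.0, 5.0, 100.0        # carrier, FM rate, FM depth (Hz)
    phase = 2 * np.pi * f_c * t + (f_dev / f_mod) * np.sin(2 * np.pi * f_mod * t)
    fm_tone = np.sin(phase)                        # constant-amplitude FM tone

    # Narrow band-pass filter standing in for a single cochlear channel,
    # centred just above the carrier so the FM sweeps across its skirt.
    sos = butter(4, [1050, 1150], btype="bandpass", fs=fs, output="sos")
    channel = sosfiltfilt(sos, fm_tone)

    analytic = hilbert(channel)
    env = np.abs(analytic)                         # slow ENV cue (fluctuates at the 5 Hz FM rate)
    tfs = np.cos(np.angle(analytic))               # fast TFS cue (near the channel centre frequency)
    print("envelope modulation depth:", env.max() - env.min())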

Neural Segregation of Concurrent Speech: Effects of Background Noise and Reverberation on Auditory Scene Analysis in the Ventral Cochlear Nucleus.

Adv Exp Med Biol

September 2016

Centre for the Neural Basis of Hearing, The Physiological Laboratory, Department of Physiology, Development and Neuroscience, Downing Street, CB2 3EG, Cambridge, UK.

Concurrent complex sounds (e.g., two voices speaking at once) are perceptually disentangled into separate "auditory objects".

The Effect of Peripheral Compression on Syllable Perception Measured with a Hearing Impairment Simulator.

Adv Exp Med Biol

September 2016

Department of Physiology, Development and Neuroscience, Centre for the Neural Basis of Hearing, University of Cambridge, Downing Street, CB2 3EG, Cambridge, UK.

Hearing impaired (HI) people often have difficulty understanding speech in multi-speaker or noisy environments. With HI listeners, however, it is often difficult to specify which stage, or stages, of auditory processing are responsible for the deficit. There might also be cognitive problems associated with age.

Enhancement of forward suppression begins in the ventral cochlear nucleus.

Brain Res

May 2016

Centre for the Neural Basis of Hearing, Department of Physiology, Development and Neuroscience, University of Cambridge, Downing Street, Cambridge, CB2 3EG, United Kingdom.

Article Synopsis
  • The study compares neural responses from the VCN with those from the central nucleus of the inferior colliculus (ICc) and finds that onset-type neurons show the strongest suppression, with faster recovery times, while neurons with sustained discharge show less suppression.
  • The findings indicate that the suppression observed in the VCN and ICc does not fully match behavioural performance in forward masking, although onset responders show a wide dynamic range of suppression similar to human psychophysical responses.

Objective: This study provides descriptive statistics of the Danish reading span (RS) test for hearing-impaired adults. The combined effect of hearing loss, RS score, and age on speech-in-noise performance in different spatial settings was evaluated in a subset of participants.

Design: Data from published and unpublished studies were re-analysed.

Tracking cortical entrainment in neural activity: auditory processes in human temporal cortex.

Front Comput Neurosci

February 2015

Neurolex Group, Department of Psychology, University of Cambridge, Cambridge, UK; MRC Cognition and Brain Sciences Unit, Cambridge, UK.

A primary objective for cognitive neuroscience is to identify how features of the sensory environment are encoded in neural activity. Current auditory models of loudness perception can be used to make detailed predictions about the neural activity of the cortex as an individual listens to speech. We used two such models (loudness-sones and loudness-phons), varying in their psychophysiological realism, to predict the instantaneous loudness contours produced by 480 isolated words.

Reverberation impairs brainstem temporal representations of voiced vowel sounds: challenging "periodicity-tagged" segregation of competing speech in rooms.

Front Syst Neurosci

January 2015

Centre for the Neural Basis of Hearing, The Physiological Laboratory, Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, UK.

The auditory system typically processes information from concurrently active sound sources (e.g., two voices speaking at once), in the presence of multiple delayed, attenuated and distorted sound-wave reflections (reverberation).

The spike trains generated by short constant-amplitude constant-frequency tone bursts in the ventral cochlear nucleus of the anaesthetised guinea pig are examined. Spikes are grouped according to the order in which they occur following the onset of the stimulus. It is found that successive inter-spike intervals have low statistical dependence according to information-theoretic measures.
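
As a rough illustration of what "low statistical dependence according to information-theoretic measures" can mean in practice, the sketch below estimates the mutual information between successive inter-spike intervals from a histogram of interval pairs; the surrogate exponential intervals and the 8-bin quantisation are assumptions for the example, not the analysis used in the article.

    # Histogram-based mutual information between interval n and interval n+1.
    # Surrogate data and binning are assumptions for illustration only.
    import numpy as np

    rng = np.random.default_rng(0)
    isis = rng.exponential(0.01, size=5000)            # surrogate inter-spike intervals (s)

    a, b = isis[:-1], isis[1:]                         # successive interval pairs
    edges = np.quantile(isis, np.linspace(0, 1, 9))    # 8 equal-occupancy bins
    ia = np.clip(np.digitize(a, edges) - 1, 0, 7)
    ib = np.clip(np.digitize(b, edges) - 1, 0, 7)

    joint = np.zeros((8, 8))
    np.add.at(joint, (ia, ib), 1)
    joint /= joint.sum()
    pa, pb = joint.sum(1), joint.sum(0)
    nz = joint > 0
    mi = np.sum(joint[nz] * np.log2(joint[nz] / (pa[:, None] * pb[None, :])[nz]))
    print("mutual information between successive ISIs: %.4f bits" % mi)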

When a high harmonic is removed from a cosine-phase harmonic complex, we hear a sine tone pop out of the complex: the sine tone has the pitch of the removed harmonic, while the tone complex has the pitch of its fundamental frequency, f0. This phenomenon is commonly referred to as Duifhuis Pitch (DP). This paper describes, for the first time, the cortical representation of DP observed with magnetoencephalography.
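
A minimal sketch of the stimulus class described here (not the study's MEG stimuli; f0, the number of harmonics, and the removed harmonic are arbitrary example values):

    # Cosine-phase harmonic complex with one high harmonic removed; the
    # missing component tends to be heard as a pure tone "popping out"
    # at its own frequency (Duifhuis Pitch). Example values only.
    import numpy as np

    fs, dur, f0 = 44100, 0.5, 125.0
    removed = 16                                  # remove the 16th harmonic (2 kHz)
    t = np.arange(0, dur, 1 / fs)
    harmonics = [h for h in range(1, 41) if h != removed]
    complex_tone = sum(np.cos(2 * np.pi * h * f0 * t) for h in harmonics)
    complex_tone /= np.max(np.abs(complex_tone))  # normalise for playback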

The mutual roles of temporal glimpsing and vocal characteristics in cocktail-party listening.

J Acoust Soc Am

July 2011

Centre for the Neural Basis of Hearing, Department of Physiology, Development and Neuroscience, University of Cambridge, Downing Street, Cambridge CB2 3EG, United Kingdom.

At a cocktail party, listeners must attend selectively to a target speaker and segregate their speech from distracting speech sounds uttered by other speakers. To solve this task, listeners can draw on a variety of vocal, spatial, and temporal cues. Recently, Vestergaard et al.

Location and acoustic scale cues in concurrent speech recognition.

J Acoust Soc Am

June 2010

Department of Physiology, Centre for the Neural Basis of Hearing, University of Cambridge, Downing Street, Cambridge CB2 3EG, United Kingdom.

Location and acoustic scale cues have both been shown to have an effect on the recognition of speech in multi-speaker environments. This study examines the interaction of these variables. Subjects were presented with concurrent triplets of syllables from a target voice and a distracting voice, and asked to recognize a specific target syllable.

Why are natural sounds detected faster than pips?

J Acoust Soc Am

March 2010

Department of Physiology, Centre for the Neural Basis of Hearing, Downing Street, Cambridge CB2 3EG, United Kingdom.

Simple reaction times (RTs) were used to measure differences in processing time between natural animal sounds and artificial sounds. When the artificial stimuli were sequences of short tone pulses, the animal sounds were detected faster than the artificial sounds. The animal sounds were then compared with acoustically modified versions (white noise modulated by the temporal envelope of the animal sounds).
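
The acoustically modified comparison stimuli are described as white noise modulated by the temporal envelope of the animal sounds; a hedged sketch of that construction is below (the input file name, the use of the Hilbert envelope, and the 50 Hz smoothing cut-off are assumptions, not details taken from the study).

    # Envelope-matched noise: white noise multiplied by the (smoothed)
    # Hilbert envelope of a natural sound. Assumes a mono input file.
    import numpy as np
    import soundfile as sf
    from scipy.signal import hilbert, butter, sosfiltfilt

    animal, fs = sf.read("animal_call.wav")       # hypothetical file name
    env = np.abs(hilbert(animal))                 # temporal envelope
    sos = butter(4, 50, btype="lowpass", fs=fs, output="sos")
    env = sosfiltfilt(sos, env)                   # keep only slow fluctuations

    matched = np.random.randn(len(animal)) * env  # envelope-modulated noise
    matched /= np.max(np.abs(matched))
    sf.write("envelope_matched_noise.wav", matched, fs)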

Equivalent-rectangular bandwidth of single units in the anaesthetized guinea-pig ventral cochlear nucleus.

Hear Res

April 2010

Centre for the Neural Basis of Hearing, The Physiological Laboratory, University of Cambridge, CB2 3EG, UK.

Frequency-tuning is a fundamental property of auditory neurons. The filter bandwidth of peripheral auditory neurons determines the frequency resolution of an animal's auditory system. Behavioural studies in animals and humans have defined frequency-tuning in terms of the "equivalent-rectangular bandwidth" (ERB) of peripheral filters.
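
For reference, the equivalent-rectangular bandwidth of a filter with power transfer function |H(f)|^2 and characteristic frequency f_CF is conventionally defined as the bandwidth of a rectangular filter with the same peak gain that passes the same total power for a white-noise input:

    \mathrm{ERB} = \frac{1}{\left|H(f_{\mathrm{CF}})\right|^{2}} \int_{0}^{\infty} \left|H(f)\right|^{2} \, df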

Effects of voicing in the recognition of concurrent syllables.

J Acoust Soc Am

December 2009

Department of Physiology, Development and Neuroscience, Centre for the Neural Basis of Hearing, University of Cambridge, Downing Street, Cambridge CB2 3EG, United Kingdom.

This letter reports a study designed to measure the benefits of voicing in the recognition of concurrent syllables. The target and distracter syllables were either voiced or whispered, producing four combinations of vocal contrast. Results show that listeners use voicing whenever it is present either to detect a target syllable or to reject a distracter.

The interaction of vocal characteristics and audibility in the recognition of concurrent syllables.

J Acoust Soc Am

February 2009

Department of Physiology, Centre for the Neural Basis of Hearing, University of Cambridge, Cambridge, United Kingdom.

In concurrent-speech recognition, performance is enhanced when either the glottal pulse rate (GPR) or the vocal tract length (VTL) of the target speaker differs from that of the distracter, but relatively little is known about the trading relationship between the two variables, or how they interact with other cues such as signal-to-noise ratio (SNR). This paper presents a study in which listeners were asked to identify a target syllable in the presence of a distracter syllable, with carefully matched temporal envelopes. The syllables varied in GPR and VTL over a large range, and they were presented at different SNRs.

Neural coding of the pitch of complex sounds is vital for animals' ability to communicate and to perceptually organize natural acoustic scenes. Harmonic complex sounds typically have a well defined pitch corresponding to their fundamental frequency, whereas inharmonic sounds can exhibit pitch ambiguity: their pitch can have more than one value. Iterated rippled noise (IRN), a common "pitch stimulus," is generated from broadband noise by a cascade of delay-and-add steps, with the delayed noise phase-shifted by φ degrees.
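
A sketch of that generation procedure is given below, assuming the common "add-same" form of the delay-and-add cascade and applying the φ-degree phase shift via the analytic signal; the delay, gain, iteration count and the wrap-around delay implementation are simplifications for illustration, not the stimulus code used in the study.

    # Iterated rippled noise (IRN) via a delay-and-add cascade; the delayed
    # copy is phase-shifted by phase_deg degrees. Simplified illustration.
    import numpy as np
    from scipy.signal import hilbert

    def make_irn(noise, fs, delay_s, gain, phase_deg, n_iter):
        d = int(round(delay_s * fs))
        phi = np.deg2rad(phase_deg)
        x = noise.copy()
        for _ in range(n_iter):
            delayed = np.roll(x, d)                        # circular delay (simplification)
            shifted = np.real(hilbert(delayed) * np.exp(1j * phi))
            x = x + gain * shifted
        return x

    fs = 44100
    noise = np.random.randn(fs)                            # 1 s of broadband noise
    irn_pos = make_irn(noise, fs, delay_s=0.004, gain=1.0, phase_deg=0.0, n_iter=8)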

Accurate neural coding of the pitch of complex sounds is an essential part of auditory scene analysis; differences in pitch help segregate concurrent sounds, while similarities in pitch can help group sounds from a common source. In quiet, nonreverberant backgrounds, pitch can be derived from timing information in broadband high-frequency auditory channels and/or from frequency and timing information carried in narrowband low-frequency auditory channels. Recording from single neurons in the cochlear nucleus of anesthetized guinea pigs, we show that the neural representation of pitch based on timing information is severely degraded in the presence of reverberation.

Functional imaging of the auditory processing applied to speech sounds.

Philos Trans R Soc Lond B Biol Sci

March 2008

Centre for the Neural Basis of Hearing, Department of Physiology, Development and Neuroscience, University of Cambridge, Downing Street, Cambridge CB2 3EG, UK.

In this paper, we describe domain-general auditory processes that we believe are prerequisite to the linguistic analysis of speech. We discuss biological evidence for these processes and how they might relate to processes that are specific to human speech and language. We begin with a brief review of (i) the anatomy of the auditory system and (ii) the essential properties of speech sounds.

Spike trains were recorded from single units in the ventral cochlear nucleus of the anaesthetised guinea-pig in response to dynamic iterated rippled noise with positive and negative gain. The short-term running waveform autocorrelation functions of these stimuli show peaks at integer multiples of the time-varying delay when the gain is +1, and troughs at odd-integer multiples and peaks at even-integer multiples of the time-varying delay when the gain is -1. In contrast, the short-term autocorrelation of the Hilbert envelope shows peaks at integer multiples of the time-varying delay for both positive and negative gain stimuli.
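
The autocorrelation property described here can be illustrated in the simpler static (fixed-delay) case; in the sketch below the gain-only delay-and-add generator, the delay and the iteration count are assumptions, and the values printed at lags d and 2d show the waveform versus Hilbert-envelope pattern.

    # Waveform vs Hilbert-envelope autocorrelation of fixed-delay IRN with
    # gain +1 and -1, evaluated at lags d and 2d. Illustration only.
    import numpy as np
    from scipy.signal import hilbert

    def irn(noise, d, gain, n_iter):
        x = noise.copy()
        for _ in range(n_iter):
            x = x + gain * np.roll(x, d)                   # circular delay (simplification)
        return x

    def acf_at(x, lag):
        x = x - x.mean()
        return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

    fs, delay_s = 44100, 0.004
    d = int(delay_s * fs)
    noise = np.random.randn(2 * fs)
    for name, gain in [("IRN+", 1.0), ("IRN-", -1.0)]:
        x = irn(noise, d, gain, n_iter=8)
        env = np.abs(hilbert(x))
        print(name, "waveform ACF at d, 2d:",
              round(acf_at(x, d), 2), round(acf_at(x, 2 * d), 2),
              "| envelope ACF at d:", round(acf_at(env, d), 2))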

There is increasing evidence that the responses of single units in the mammalian cochlear nucleus can be altered by the presentation of contralateral stimuli, although the functional significance of this binaural responsiveness is unknown. To further our understanding of this phenomenon we recorded single-unit (n = 110) response maps from the cochlear nucleus (ventral and dorsal divisions) of the anaesthetized guinea pig in response to presentation of ipsilateral and contralateral pure tones. Many neurones showed no evidence of input from the contralateral ear (n = 41) but other neurones from both ventral and dorsal cochlear nucleus showed clear evidence of contralateral inhibitory input (n = 61).

Perception of acoustic scale and size in musical instrument sounds.

J Acoust Soc Am

October 2006

Centre for the Neural Basis of Hearing, Department of Physiology, Development and Neuroscience, University of Cambridge, Downing Street, Cambridge CB2 3EG, UK.

There is size information in natural sounds. For example, as humans grow in height, their vocal tracts increase in length, producing a predictable decrease in the formant frequencies of speech sounds. Recent studies have shown that listeners can make fine discriminations about which of two speakers has the longer vocal tract, supporting the view that the auditory system discriminates changes on the acoustic-scale dimension.

The role of chopper units in representing the pitch of complex sounds is unresolved. Traditionally, chopper units have been regarded as responding primarily to the stimulus envelope of complex stimuli. This view has been supported by their response to iterated rippled noise (IRN): chopper units can provide a robust representation of the delay of IRN with positive gain (+) in their first-order interspike intervals, and for some chopper units this representation is relatively level independent.

It is commonly assumed that, in the cochlea and the brainstem, the auditory system processes speech sounds without differentiating them from any other sounds. At some stage, however, it must treat speech sounds and nonspeech sounds differently, since we perceive them as different. The purpose of this study was to delimit the first location in the auditory pathway that makes this distinction using functional MRI, by identifying regions that are differentially sensitive to the internal structure of speech sounds as opposed to closely matched control sounds.

Auditory-nerve first-spike latency and auditory absolute threshold: a computer model.

J Acoust Soc Am

January 2006

Centre for the Neural Basis of Hearing at Essex, Department of Psychology, University of Essex, Colchester CO4 3SQ, United Kingdom.

A computer model of the auditory periphery was used to address the question of what constitutes the physiological substrate of absolute auditory threshold. The model was first evaluated to show that it is consistent with experimental findings that auditory-nerve fiber spikes can be predicted to occur when the running integral of stimulus pressure reaches some critical value [P. Heil and H.
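
A toy reading of that criterion (not the article's computer model): the predicted first-spike time is where the running integral of a non-negative transform of stimulus pressure first reaches a critical value. The half-wave rectification, tone parameters and criterion below are assumptions for illustration.

    # First-spike latency from a pressure-integration criterion: integrate
    # (half-wave rectified) pressure and find the first threshold crossing.
    import numpy as np

    fs = 100000
    t = np.arange(0, 0.05, 1 / fs)
    ramp = np.minimum(t / 0.005, 1.0)                   # 5 ms linear onset ramp
    pressure = ramp * np.sin(2 * np.pi * 1000 * t)      # 1 kHz tone burst (arbitrary units)

    drive = np.maximum(pressure, 0.0)                   # assumed non-negative transform
    integral = np.cumsum(drive) / fs                    # running integral
    criterion = 0.002                                   # arbitrary critical value
    crossing = np.argmax(integral >= criterion)         # first sample at/above criterion
    print("predicted first-spike latency: %.2f ms" % (1000 * crossing / fs))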
