Publications by authors named "Summerfield Q"

Objective: The Toy Discrimination Test measures children's ability to discriminate spoken words. Previous assessments of reliability tested children with normal hearing or mild hearing impairment, and most studies used a version of the test without a masking sound. We assessed test-retest reliability for children with hearing impairment using maskers of broadband noise and two-talker babble.

To explore the neural processes underlying concurrent sound segregation, auditory evoked fields (AEFs) were measured using magnetoencephalography (MEG). To induce the segregation of two auditory objects, we manipulated harmonicity and onset synchrony. Participants were presented with complex sounds with (i) all harmonics in tune, (ii) the third harmonic mistuned by 8% of its original value, or (iii) the onset of the third harmonic delayed by 160 ms relative to the other harmonics.
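The three stimulus conditions can be sketched in a few lines of Python. Only the 8% mistuning and the 160-ms onset delay come from the abstract; the sample rate, fundamental, harmonic count, and duration are illustrative assumptions.

```python
import math

SR = 16000          # sample rate in Hz (assumed, not from the study)
F0 = 200            # fundamental frequency in Hz (assumed)
N_HARMONICS = 5     # number of harmonics (assumed)
DUR = 0.4           # stimulus duration in seconds (assumed)

def complex_tone(mistune_pct=0.0, delay_s=0.0, mistuned_idx=3):
    """Sum of equal-amplitude harmonics of F0.

    mistune_pct shifts the frequency of harmonic `mistuned_idx` by that
    percentage; delay_s silences that harmonic for the first delay_s
    seconds, mimicking an onset asynchrony.
    """
    n_samples = int(SR * DUR)
    delay_samples = int(SR * delay_s)
    samples = []
    for n in range(n_samples):
        t = n / SR
        s = 0.0
        for h in range(1, N_HARMONICS + 1):
            f = h * F0
            if h == mistuned_idx:
                f *= 1.0 + mistune_pct / 100.0   # e.g. 8% mistuning
                if n < delay_samples:            # onset delay
                    continue
            s += math.sin(2 * math.pi * f * t)
        samples.append(s / N_HARMONICS)
    return samples

in_tune  = complex_tone()                        # condition (i)
mistuned = complex_tone(mistune_pct=8.0)         # condition (ii)
delayed  = complex_tone(delay_s=0.160)           # condition (iii)
```

Once the delayed harmonic has started (after 160 ms) the delayed stimulus is identical to the in-tune one; before that point, and throughout the mistuned stimulus, the waveforms differ.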

Objective: Analysis of the cost implications and reasons for nonuse of cochlear implants in an established cochlear implant unit.

Study Design: Clinical data were analyzed retrospectively to construct a table of cochlear implant use over time to identify nonuse and to suggest the reasons for this.

Setting: Yorkshire Cochlear Implant Service is a tertiary referral center.

Utility scores were estimated for 609 hearing-impaired adults who completed EQ-5D, Health Utilities Index Mark III (HUI3) and SF-6D survey instruments both before and after being provided with a hearing aid. Pre-intervention, the mean utility scores for EQ-5D (0.80) and SF-6D (0.

Two experiments investigated the effect of frequency modulation on the identification of vowel sounds presented concurrently with interfering vowels. In experiment 1, identification thresholds were measured for each of five target vowels, masked, in each trial, by one of ten masking vowels. Both target and masking vowels were synthesized using harmonically spaced frequency components.

Three experiments and a computational model explored the role of within-channel and across-channel processes in the perceptual separation of competing, complex, broadband sounds which differed in their interaural phase spectra. In each experiment, two competing vowels, whose first and second formants were represented by two discrete bands of noise, were presented concurrently, for identification. Experiments 1 and 2 showed that listeners were able to identify the vowels accurately when each was presented to a different ear, but were unable to identify the vowels when they were presented with different interaural time delays (ITDs); i.

A form of auditory "enhancement" can be demonstrated by omitting a component from a harmonic series for a few hundred milliseconds and then replacing it: the replaced component stands out perceptually. Psychophysical experiments have shown that components generate more forward masking when enhanced than when present but not enhanced. This result has been interpreted as demonstrating that enhancement involves an increase in gain in the frequency region of the replaced component.

In four experiments we investigated whether listeners can locate the formants of vowels not only from peaks, but also from spectral "shoulders" (features that give rise to zero crossings in the third, but not the first, differential of the excitation pattern), as hypothesized by Assmann and Summerfield (1989). Stimuli were steady-state approximations to the vowels [a, i, e, u, o] created by summing the first 45 harmonics of a fundamental of 100 Hz. Thirty-nine harmonics had equal amplitudes; the other 6 formed three pairs that were raised in level to define three "formants.

The IHR-McCormick Automated Toy Discrimination Test (ATT) measures the minimum sound level at which a child can identify words presented in quiet in the sound field. This 'word-discrimination threshold' provides a direct measure of the ease with which a child can identify speech and a surrogate measure of auditory sensitivity. This paper describes steps taken to maximize the test-retest reliability of the ATT and to enable it to measure word-discrimination thresholds in noise as well as in quiet.

Models of the auditory and phonetic analysis of speech must account for the ability of listeners to extract information from speech when competing voices are present. When two synthetic vowels are presented simultaneously and monaurally, listeners can exploit cues provided by a difference in fundamental frequency (F0) between the vowels to help determine their phonemic identities. Three experiments examined the effects of stimulus duration on the perception of such "double vowels.

Four experiments sought evidence that listeners can use coherent changes in the frequency or amplitude of harmonics to segregate concurrent vowels. Segregation was not helped by giving the harmonics of competing vowels different patterns of frequency or amplitude modulation. However, modulating the frequencies of the components of one vowel was beneficial when the other vowel was not modulated, provided that both vowels were composed of components placed randomly in frequency.

Lipreading and audio-visual speech perception.

Philos Trans R Soc Lond B Biol Sci

January 1992

This paper reviews progress in understanding the psychology of lipreading and audio-visual speech perception. It considers four questions. What distinguishes better from poorer lipreaders? What are the effects of introducing a delay between the acoustical and optical speech signals? What have attempts to produce computer animations of talking faces contributed to our understanding of the visual cues that distinguish consonants and vowels? Finally, how should the process of audio-visual integration in speech perception be described; that is, how are the sights and sounds of talking faces represented at their conflux?

Procedures for enhancing the intelligibility of a target talker in the presence of a co-channel competing talker were evaluated in tests involving (i) continuously voiced sentences spoken on a monotone, (ii) continuously voiced sentences with time-varying intonation, and (iii) noncontinuously voiced sentences produced with natural intonation. The procedures were based on the methods of harmonic selection and cepstral filtering [R.J.

Three experiments examined the ability of listeners to identify steady-state synthetic vowel-like sounds presented concurrently in pairs to the same ear. Experiment 1 confirmed earlier reports that listeners identify the constituents of such pairs more accurately when they differ in fundamental frequency (f0) by about a half semitone or more, compared to the condition where they have the same f0. When the constituents have different f0's, corresponding harmonics of the two vowels are misaligned in frequency and corresponding pitch periods are asynchronous in time.
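The half-semitone criterion above is easy to make concrete. A semitone is a frequency ratio of 2^(1/12), so half a semitone is 2^(1/24); the sketch below assumes a 100-Hz F0 purely for illustration and shows how the misalignment of corresponding harmonics grows with harmonic number:

```python
# A full semitone is a ratio of 2 ** (1 / 12), so half a semitone
# is 2 ** (1 / 24), about a 2.9% difference in frequency.
HALF_SEMITONE = 2 ** (1 / 24)

f0_a = 100.0                  # illustrative F0, not from the abstract
f0_b = f0_a * HALF_SEMITONE   # roughly 102.93 Hz

# Corresponding harmonics of the two vowels drift further apart
# in frequency as harmonic number increases.
misalignment = [h * (f0_b - f0_a) for h in range(1, 6)]
```

The first harmonics differ by about 3 Hz, but by the fifth harmonic the gap is nearly 15 Hz, which is one reason a small F0 difference can separate the spectra of two vowels.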

If two vowels with different fundamental frequencies (F0's) are presented simultaneously and monaurally, listeners often hear two talkers producing different vowels on different pitches. This paper describes the evaluation of four computational models of the auditory and perceptual processes which may underlie this ability. Each model involves four stages: (i) frequency analysis using an "auditory" filter bank; (ii) determination of the pitches present in the stimulus; (iii) segregation of the competing speech sources by grouping energy associated with each pitch to create two derived spectral patterns; and (iv) classification of the derived spectral patterns to predict the probabilities of listeners' vowel-identification responses.
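Stage (iii), grouping energy by pitch, can be illustrated with a toy sketch. This is not one of the four models evaluated in the paper: the component frequencies and F0 values are invented, and real models operate on filter-bank energy rather than on exact component frequencies.

```python
def nearest_harmonic_distance(freq, f0):
    """Distance in Hz from freq to the nearest integer multiple of f0."""
    h = max(1, round(freq / f0))
    return abs(freq - h * f0)

def segregate(components, f0_a, f0_b):
    """Assign each spectral component to whichever pitch's harmonic
    grid it falls closest to, yielding two derived spectra."""
    spec_a, spec_b = [], []
    for f in components:
        if nearest_harmonic_distance(f, f0_a) <= nearest_harmonic_distance(f, f0_b):
            spec_a.append(f)
        else:
            spec_b.append(f)
    return spec_a, spec_b

# A mixture of harmonics of 100 Hz and of 126 Hz (invented values).
mix = [100, 126, 200, 252, 300, 378, 400, 504]
a, b = segregate(mix, 100.0, 126.0)
```

With these values the grouping recovers the two harmonic series exactly; with closer F0's, or with components smeared across auditory filters, the assignment becomes ambiguous, which is what the models in the paper are designed to handle.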

The strategy for measuring speech-reception thresholds for sentences in noise advocated by Plomp and Mimpen (Audiology, 18, 43-52, 1979) was modified to create a reliable test for measuring the difficulty which listeners have in speech reception, both auditorily and audio-visually. The test materials consist of 10 lists of 15 short sentences of homogeneous intelligibility when presented acoustically, and of different, but still homogeneous, intelligibility when presented audio-visually, in white noise. Homogeneity was achieved by applying phonetic and linguistic principles at the stage of compilation, followed by pilot testing and balancing of properties.
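The Plomp and Mimpen strategy referred to above is an adaptive up-down track over a sentence list. A minimal sketch, assuming a 1-up/1-down rule with a 2-dB step and the SRT taken as the mean presentation level; these details are assumptions for illustration, not taken from the abstract:

```python
def measure_srt(respond, n_trials=15, start_snr=0.0, step_db=2.0):
    """Simple 1-up/1-down adaptive track: lower the SNR by one step
    after a correct response, raise it after an error, and take the
    SRT as the mean SNR over the track."""
    snr = start_snr
    levels = []
    for _ in range(n_trials):
        levels.append(snr)
        snr += -step_db if respond(snr) else step_db
    return sum(levels) / len(levels)

# Simulated listener: always correct above -6 dB SNR, never below.
srt = measure_srt(lambda snr: snr > -6.0, n_trials=30, start_snr=4.0)
```

The track descends to the simulated listener's limit and then oscillates around it, so the estimated SRT lands between -6 and -4 dB. Homogeneous sentence lists matter because the rule scores one sentence per level: unevenly difficult sentences would add noise to the track.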

Two signal-processing procedures for separating the continuously-voiced speech of competing talkers are described and evaluated. With competing sentences, each spoken on a monotone, the procedures improved the intelligibility of the target talker both for listeners with normal hearing and for listeners with moderate-to-severe hearing losses of cochlear origin. However, with intoned sentences, benefits were smaller for normal-hearing listeners and were inconsistent for impaired listeners.

Two signal-processing algorithms, derived from those described by Stubbs and Summerfield [R.J. Stubbs and Q.

Listeners identified both constituents of double vowels created by summing the waveforms of pairs of synthetic vowels with the same duration and fundamental frequency. Accuracy of identification was significantly above chance. Effects of introducing such double vowels by visual or acoustical precursor stimuli were examined.

The ability of listeners to identify pairs of simultaneous synthetic vowels has been investigated in the first of a series of studies on the extraction of phonetic information from multiple-talker waveforms. Both members of the vowel pair had the same onset and offset times and a constant fundamental frequency of 100 Hz. Listeners identified both vowels with an accuracy significantly greater than chance.

Two signal-processing algorithms, designed to separate the voiced speech of two talkers speaking simultaneously at similar intensities in a single channel, were compared and evaluated. Both algorithms exploit the harmonic structure of voiced speech and require a difference in fundamental frequency (F0) between the voices to operate successfully. One attenuates the interfering voice by filtering the cepstrum of the combined signal.
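One simple way to see how a difference in F0 can be exploited is a period-averaging comb filter: delaying the mixture by whole pitch periods of the target talker and averaging reinforces the target's harmonics while largely cancelling the interferer. This is only a sketch of the general principle, not the cepstral or harmonic-selection algorithms compared in the paper; the sample rate and F0 values are invented.

```python
import math

SR = 8000   # sample rate in Hz (assumed)

def comb_enhance(x, f0, taps=3):
    """Average the signal with copies delayed by multiples of the
    target talker's pitch period. Components harmonic to f0 add
    coherently; other components are attenuated."""
    period = round(SR / f0)
    return [sum(x[n - k * period] for k in range(taps) if n - k * period >= 0)
            / taps for n in range(len(x))]

# Target voice at 100 Hz plus an interferer at 133 Hz (invented F0s).
n_samp = 4000
target = [math.sin(2 * math.pi * 100 * n / SR) for n in range(n_samp)]
interf = [math.sin(2 * math.pi * 133 * n / SR) for n in range(n_samp)]
mix = [a + b for a, b in zip(target, interf)]
out = comb_enhance(mix, 100.0)

def rms(sig):
    return math.sqrt(sum(s * s for s in sig) / len(sig))
```

After the filter's start-up transient, the output is close to the 100-Hz target alone: the 133-Hz interferer's delayed copies arrive at nearly evenly spread phases and cancel, which is why such methods need a clear F0 difference between the voices to work.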

The intelligibility of sentences presented in noise improves when the listener can view the talker's face. Our aims were to quantify this benefit, and to relate it to individual differences among subjects in lipreading ability and among sentences in lipreading difficulty. Auditory and audiovisual speech-reception thresholds (SRTs) were measured in 20 listeners with normal hearing.
