There is evidence that for both auditory and visual speech perception, familiarity with the talker facilitates speech recognition. Explanations of these effects have concentrated on the retention of talker information specific to each of these modalities. It could be, however, that some amodal, talker-specific articulatory-style information facilitates speech perception in both modalities. If this is true, then experience with a talker in one modality should facilitate perception of speech from that talker in the other modality. In a test of this prediction, subjects were given about 1 hr of experience lipreading a talker and were then asked to recover speech in noise from either this same talker or a different talker. Results revealed that subjects who lip-read and heard speech from the same talker performed better on the speech-in-noise task than did subjects who lip-read from one talker and then heard speech from a different talker.
DOI: http://dx.doi.org/10.1111/j.1467-9280.2007.01911.x
Imaging Neurosci (Camb)
April 2024
Department of Electrical Engineering, Columbia University, New York, NY, United States.
Listeners with hearing loss have trouble following a conversation in multitalker environments. While modern hearing aids can generally amplify speech, these devices are unable to tune into a target speaker without first knowing which speaker a user aims to attend to. Brain-controlled hearing aids have been proposed using auditory attention decoding (AAD) methods, but current methods use the same model to compare the speech stimulus and neural response regardless of the dynamic overlap between talkers, which is known to influence neural encoding.
J Exp Psychol Hum Percept Perform
January 2025
School of Psychology, University of Sussex.
Human listeners have a remarkable capacity to adapt to severe distortions of the speech signal. Previous work indicates that perceptual learning of degraded speech reflects changes to sublexical representations, though the precise format of these representations has not yet been established. Inspired by the neurophysiology of auditory cortex, we hypothesized that perceptual learning involves changes to perceptual representations that are tuned to acoustic modulations of the speech signal.
J Acoust Soc Am
January 2025
USC Viterbi School of Engineering, University of Southern California, Los Angeles, California 90089-1455, USA.
Voice quality serves as a rich source of information about speakers, providing listeners with impressions of identity, emotional state, age, sex, reproductive fitness, and other biologically and socially salient characteristics. Understanding how this information is transmitted, accessed, and exploited requires knowledge of the psychoacoustic dimensions along which voices vary, an area that remains largely unexplored. Recent studies of English speakers have shown that two factors related to speaker size and arousal consistently emerge as the most important determinants of quality, regardless of who is speaking.
J Neurosci
January 2025
Department of Psychology, University of Lübeck, Lübeck, Germany.
Amplitude compression is an indispensable feature of contemporary audio production and especially relevant in modern hearing aids. The cortical fate of amplitude-compressed speech signals is not well-studied, however, and may yield undesired side effects: We hypothesize that compressing the amplitude envelope of continuous speech reduces neural tracking. Yet, leveraging such a 'compression side effect' on unwanted, distracting sounds could potentially support attentive listening if effectively reducing their neural tracking.
J Speech Lang Hear Res
January 2025
Department of Speech, Language, and Hearing Sciences, The University of Arizona, Tucson.
Purpose: The purpose of this study was to determine if the Vocabulary Acquisition and Usage for Late Talkers (VAULT) intervention could be efficaciously applied to a new treatment target: words a child neither understood nor said. We also assessed whether the type of context variability used to encourage semantic learning (i.e.