Many older adults live with some form of hearing loss and have difficulty understanding speech in the presence of background sound. Such difficulties can lead to increased listening effort and fatigue. Social interactions may become less appealing as a result, and age-related hearing loss is associated with an increased risk of social isolation and related negative psychosocial health outcomes.
Listening environments contain background sounds that mask speech and lead to communication challenges. Sensitivity to slow acoustic fluctuations in speech can help segregate speech from background noise. Semantic context can also facilitate speech perception in noise, for example, by enabling prediction of upcoming words.
Speech is more intelligible when it is spoken by familiar than unfamiliar people. If this benefit arises because key voice characteristics like perceptual correlates of fundamental frequency or vocal tract length (VTL) are more accurately represented for familiar voices, listeners may be able to discriminate smaller manipulations to such characteristics for familiar than unfamiliar voices. We measured participants' (N = 17) thresholds for discriminating pitch (correlate of fundamental frequency, or glottal pulse rate) and formant spacing (correlate of VTL; 'VTL-timbre') for voices that were familiar (participants' friends) and unfamiliar (other participants' friends).
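Discrimination thresholds of this kind are typically estimated with an adaptive procedure. Below is a minimal Python sketch of a 2-down/1-up staircase, which converges on roughly 70.7% correct (Levitt, 1971); the starting difference, step size, and simulated observer are illustrative assumptions, not the study's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_observer(delta, threshold=2.0, slope=1.5):
    """Hypothetical observer: probability of a correct response grows
    with the size of the pitch (or VTL) manipulation, in semitones."""
    p = 0.5 + 0.5 / (1.0 + np.exp(-slope * (delta - threshold)))
    return rng.random() < p

def staircase_2down_1up(start=6.0, step=0.5, n_reversals=10):
    """2-down/1-up staircase: two correct responses shrink the difference,
    one error enlarges it; threshold = mean of the last reversal points."""
    delta, correct_streak, direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if simulated_observer(delta):
            correct_streak += 1
            if correct_streak == 2:          # two correct -> make task harder
                correct_streak = 0
                if direction == +1:          # direction flipped: a reversal
                    reversals.append(delta)
                direction = -1
                delta = max(delta - step, 0.05)
        else:                                # one wrong -> make task easier
            correct_streak = 0
            if direction == -1:
                reversals.append(delta)
            direction = +1
            delta += step
    return np.mean(reversals[-6:])

print(f"estimated discrimination threshold: {staircase_2down_1up():.2f} semitones")
```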
Listening in everyday life requires attention to be deployed dynamically - when listening is expected to be difficult and when relevant information is expected to occur - to conserve mental resources. Conserving mental resources may be particularly important for older adults, who often experience difficulties understanding speech. In the current study, we use electro- and magnetoencephalography to investigate the neural and behavioral mechanics of attention regulation during listening and the effects of aging on them.
Perception of speech requires sensitivity to features, such as amplitude and frequency modulations, that are often temporally regular. Previous work suggests age-related changes in neural responses to temporally regular features, but little work has focused on age differences for different types of modulations. We recorded magnetoencephalography in younger (21-33 years) and older adults (53-73 years) to investigate age differences in neural responses to slow (2-6 Hz sinusoidal and non-sinusoidal) modulations in amplitude, frequency, or combined amplitude and frequency.
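Sinusoidal amplitude and frequency modulations of this kind can be written down compactly. The NumPy sketch below generates both at a 4 Hz rate (within the 2-6 Hz range studied); the carrier frequency, modulation depth, and frequency excursion are assumed values for illustration, not the study's stimulus parameters.

```python
import numpy as np

fs = 44100                      # sampling rate (Hz)
dur = 2.0                       # duration (s)
t = np.arange(int(fs * dur)) / fs

fc = 1000.0                     # carrier frequency (Hz), assumed
fm = 4.0                        # modulation rate (Hz), within the 2-6 Hz range
m = 0.8                         # AM depth (0-1), assumed

# Amplitude modulation: a (1 + m*sin) envelope scales a fixed-frequency carrier.
am = (1.0 + m * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

# Frequency modulation: instantaneous frequency swings +/- df around fc;
# the phase term is the integral of the instantaneous frequency deviation.
df = 200.0                      # frequency excursion (Hz), assumed
fm_sig = np.sin(2 * np.pi * fc * t + (df / fm) * np.sin(2 * np.pi * fm * t))
```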
Pitch discrimination is better for complex tones than pure tones, but how pitch discrimination differs between natural and artificial sounds is not fully understood. This study compared pitch discrimination thresholds for flat-spectrum harmonic complex tones with those for natural sounds played by musical instruments of three different timbres (violin, trumpet, and flute). To investigate whether natural familiarity with sounds of particular timbres affects pitch discrimination thresholds, this study recruited non-musicians and musicians who were trained on one of the three instruments.
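A flat-spectrum harmonic complex tone is simply a sum of equal-amplitude harmonics of a common fundamental. A minimal sketch, assuming sine phase and an arbitrary harmonic count; the fundamental frequencies and the 0.5-semitone difference are illustrative:

```python
import numpy as np

def harmonic_complex(f0, n_harmonics=12, fs=44100, dur=0.5):
    """Flat-spectrum harmonic complex: equal-amplitude harmonics of f0,
    summed in sine phase and peak-normalized."""
    t = np.arange(int(fs * dur)) / fs
    tone = sum(np.sin(2 * np.pi * f0 * k * t) for k in range(1, n_harmonics + 1))
    return tone / np.max(np.abs(tone))

# e.g., two tones differing by 0.5 semitones, as in a discrimination trial
ref = harmonic_complex(220.0)
probe = harmonic_complex(220.0 * 2 ** (0.5 / 12))
```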
Speech is often degraded by environmental noise or hearing impairment. People can compensate for degradation, but this requires cognitive effort. Previous research has identified frontotemporal networks involved in effortful perception, but the materials in those studies were also less intelligible, so it is unclear whether the observed activity reflected effort or intelligibility differences.
Fluctuating background sounds facilitate speech intelligibility by providing speech 'glimpses' (masking release). Older adults benefit less from glimpses, but masking release is typically investigated using isolated sentences. Recent work indicates that using engaging, continuous speech materials (e.
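One common way to quantify glimpses (following glimpsing models of speech perception in noise) is the proportion of spectro-temporal regions in which the speech exceeds the masker by some criterion. The sketch below assumes a short-time Fourier decomposition and a 3 dB criterion; both are illustrative choices, not the measure used in the study.

```python
import numpy as np
from scipy.signal import stft

def glimpse_proportion(speech, masker, fs, criterion_db=3.0):
    """Proportion of time-frequency cells where the speech magnitude
    exceeds the masker magnitude by at least `criterion_db`."""
    _, _, S = stft(speech, fs=fs, nperseg=512)
    _, _, M = stft(masker, fs=fs, nperseg=512)
    local_snr = 20 * np.log10((np.abs(S) + 1e-12) / (np.abs(M) + 1e-12))
    return np.mean(local_snr > criterion_db)

# toy usage with noise standing in for the speech and masker signals
rng = np.random.default_rng(1)
fs = 16000
print(glimpse_proportion(rng.standard_normal(fs), 0.5 * rng.standard_normal(fs), fs))
```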
Older people with hearing problems often experience difficulties understanding speech in the presence of background sound. As a result, they may disengage in social situations, which has been associated with negative psychosocial health outcomes. Measuring listening (dis)engagement during challenging listening situations has received little attention thus far.
Optimal perception requires adaptation to sounds in the environment. Adaptation involves representing the acoustic stimulation history in neural response patterns, for example, by altering response magnitude or latency as sound-level context changes. Neurons in the auditory brainstem of rodents are sensitive to acoustic stimulation history and sound-level context (often referred to as sensitivity to stimulus statistics), but the degree to which the human brainstem exhibits such neural adaptation is unclear.
Most listeners have an implicit understanding of the rules that govern how music unfolds over time. This knowledge is acquired in part through statistical learning, a robust learning mechanism that allows individuals to extract regularities from the environment. However, it is presently unclear how this prior musical knowledge might facilitate or interfere with the learning of novel tone sequences that do not conform to familiar musical rules.
Sensitivity to repetitions in sound amplitude and frequency is crucial for sound perception. As with other aspects of sound processing, sensitivity to such patterns may change with age and may help explain some age-related changes in hearing, such as difficulty segregating speech from background sound. We recorded magnetoencephalography to characterize differences in the processing of sound patterns between younger and older adults.
Repeating structures forming regular patterns are common in sounds. Learning such patterns may enable accurate perceptual organization. In five experiments, we investigated the behavioral and neural signatures of rapid perceptual learning of regular sound patterns.
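Regular sound patterns in this literature are often sequences of brief tone pips in which a randomly drawn set of frequencies either repeats cyclically (regular) or is drawn anew for every pip (random). A sketch under assumed pip duration, cycle length, and frequency range; these parameters are illustrative, not the experiments' actual values:

```python
import numpy as np

rng = np.random.default_rng(2)

def tone_pip(freq, fs=44100, dur=0.05):
    """Brief tone pip with 5-ms linear onset/offset ramps."""
    t = np.arange(int(fs * dur)) / fs
    ramp = np.minimum(1.0, np.minimum(t, t[::-1]) / 0.005)
    return ramp * np.sin(2 * np.pi * freq * t)

def sequence(regular, n_pips=60, cycle=10, fmin=200.0, fmax=2000.0):
    """Regular: one random cycle of `cycle` frequencies repeats; random:
    every pip's frequency is drawn anew (log-uniform between fmin and fmax)."""
    if regular:
        cyc = np.exp(rng.uniform(np.log(fmin), np.log(fmax), cycle))
        freqs = np.tile(cyc, n_pips // cycle)
    else:
        freqs = np.exp(rng.uniform(np.log(fmin), np.log(fmax), n_pips))
    return np.concatenate([tone_pip(f) for f in freqs])

regular_seq, random_seq = sequence(regular=True), sequence(regular=False)
```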
When people listen to speech in noisy places, they can understand more words spoken by someone familiar, such as a friend or partner, than someone unfamiliar. Yet we know little about how voice familiarity develops over time. We exposed participants (N = 50) to three voices for different lengths of time (speaking 88, 166, or 478 sentences during familiarization and training).
When speech is masked by competing sound, people are better at understanding what is said if the talker is familiar rather than unfamiliar. The benefit is robust, but how does processing of familiar voices facilitate intelligibility? We combined high-resolution fMRI with representational similarity analysis to quantify the difference in distributed activity between clear and masked speech. We demonstrate that brain representations of spoken sentences are less affected by a competing sentence when they are spoken by a friend or partner than by someone unfamiliar, effectively showing a cortical signal-to-noise ratio (SNR) enhancement for familiar voices.
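Representational similarity analysis compares conditions through the geometry of distributed activity patterns rather than overall activation levels. The sketch below builds representational dissimilarity matrices from simulated voxel patterns and correlates their upper triangles; the array shapes, noise model, and correlation-distance metric are assumptions, not the study's actual pipeline.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between activity patterns (rows = conditions, columns = voxels)."""
    return 1.0 - np.corrcoef(patterns)

rng = np.random.default_rng(3)
n_sentences, n_voxels = 20, 500
clear = rng.standard_normal((n_sentences, n_voxels))
masked = clear + 0.8 * rng.standard_normal((n_sentences, n_voxels))  # degraded copy

# How much does masking distort the representational geometry?
iu = np.triu_indices(n_sentences, k=1)
similarity = np.corrcoef(rdm(clear)[iu], rdm(masked)[iu])[0, 1]
print(f"clear-vs-masked RDM correlation: {similarity:.2f}")
```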
Many older listeners have difficulty understanding speech in noise, when cues to speech-sound identity are less redundant. The amplitude envelope of speech fluctuates dramatically over time, and features such as the rate of amplitude change at onsets (attack) and offsets (decay) signal critical information about the identity of speech sounds. Aging is also thought to be accompanied by increases in cortical excitability, which may differentially alter sensitivity to envelope dynamics.
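The amplitude envelope, and the attack and decay rates it carries, can be extracted with a Hilbert transform followed by smoothing. A minimal sketch on a synthetic stimulus; the envelope shape, 30 Hz smoothing cutoff, and rate-of-change measure are assumed for illustration.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 16000
t = np.arange(fs) / fs  # 1 s

# Synthetic tone whose envelope has a slow attack and a fast decay (assumed shape).
env_true = np.clip(t / 0.3, 0, 1) * np.clip((1.0 - t) / 0.05, 0, 1)
x = env_true * np.sin(2 * np.pi * 440 * t)

# Envelope = magnitude of the analytic signal, low-pass smoothed at 30 Hz.
env = np.abs(hilbert(x))
b, a = butter(4, 30 / (fs / 2))
env = filtfilt(b, a, env)

# Attack shows up as positive rate of change, decay as negative.
rate_of_change = np.gradient(env, 1 / fs)
print(f"peak attack rate: {rate_of_change.max():.1f} /s, "
      f"peak decay rate: {rate_of_change.min():.1f} /s")
```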
It is well established that movement planning recruits motor-related cortical brain areas in preparation for the forthcoming action. Given that an integral component of the control of action is the processing of sensory information throughout movement, we predicted that movement planning might also modulate early sensory cortical areas, readying them for sensory processing during the unfolding action. To test this hypothesis, we performed two human functional magnetic resonance imaging studies involving separate delayed movement tasks and focused on premovement neural activity in early auditory cortex, given the area's direct connections to the motor system and evidence that it is modulated by motor cortex during movement in rodents.
Comprehension of speech masked by background sound requires increased cognitive processing, which makes listening effortful. Research in hearing has focused on such challenging listening experiences, in part because they are thought to contribute to social withdrawal in people with hearing impairment. Research has focused less on positive listening experiences, such as enjoyment, despite their potential importance in motivating effortful listening.
Speech comprehension is challenged by background noise, acoustic interference, and linguistic factors, such as the presence of words with more than one meaning (homonyms and homophones). Previous work suggests that homophony in spoken language increases cognitive demand. Here, we measured pupil dilation - a physiological index of cognitive demand - while listeners heard sentences containing words with more than one meaning or well-matched sentences without ambiguous words.
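Pupillometry analyses of this kind usually express each trial relative to a pre-stimulus baseline before averaging over a window of interest. A sketch with an assumed sampling rate, baseline window, and subtractive correction; none of these choices are taken from the study itself, and the random-walk traces merely stand in for recorded pupil data.

```python
import numpy as np

fs = 60                        # eye-tracker sampling rate (Hz), assumed
baseline_s, trial_s = 1.0, 5.0

rng = np.random.default_rng(4)
n_trials = 40
n_samples = int(fs * (baseline_s + trial_s))
trials = rng.standard_normal((n_trials, n_samples)).cumsum(axis=1)  # pupil-like drift

# Subtractive baseline correction: remove each trial's mean pre-stimulus pupil size.
base = trials[:, : int(fs * baseline_s)].mean(axis=1, keepdims=True)
corrected = trials - base

# Condition effect = mean corrected dilation over a post-onset window of interest.
window = slice(int(fs * baseline_s), int(fs * (baseline_s + 3.0)))
print(f"mean dilation in window: {corrected[:, window].mean():.2f} (arbitrary units)")
```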
Hearing loss is associated with changes at the peripheral, subcortical, and cortical auditory stages. Research often focuses on these stages in isolation, but peripheral damage has cascading effects on central processing, and different stages are interconnected through extensive feedforward and feedback projections. Accordingly, assessment of the entire auditory system is needed to understand auditory pathology.
Hearing impairment in older adulthood puts people at risk of communication difficulties, disengagement from listening, and social withdrawal. Here, we develop a model of listening engagement (MoLE) that provides a conceptual foundation to understand when people engage in listening and why some people disengage. We use the term "listening engagement" to describe the recruitment of executive and other cognitive resources in the service of a valued communication goal.
Sensitivity to sound-level statistics is crucial for optimal perception, but research has focused mostly on neurophysiological recordings, whereas behavioral evidence is sparse. We use electroencephalography (EEG) and behavioral methods to investigate how sound-level statistics affect neural activity and the detection of near-threshold changes in sound amplitude. We presented noise bursts with sound levels drawn from distributions with either a low or a high modal sound level.
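Drawing each burst's level from a distribution with a low or high mode can be sketched directly; the normal shape, spread, and dB values below are assumptions standing in for the study's actual distributions.

```python
import numpy as np

rng = np.random.default_rng(5)

def draw_levels(modal_db, n=200, spread=6.0, lo=30.0, hi=90.0):
    """Per-burst sound levels (dB SPL) from a normal distribution centered
    on `modal_db`; values are clipped to the presented range."""
    levels = rng.normal(modal_db, spread, n)
    return np.clip(levels, lo, hi)

low_context = draw_levels(modal_db=45.0)    # low modal sound level
high_context = draw_levels(modal_db=75.0)   # high modal sound level
print(f"context means: {low_context.mean():.1f} vs {high_context.mean():.1f} dB")
```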
Understanding speech in adverse conditions is affected by experience - a familiar voice is substantially more intelligible than an unfamiliar voice when competing speech is present, even if the content of the speech (the words) is controlled. This familiar-voice benefit is observed consistently, but its underpinnings are unclear: Do familiar voices simply attract more attention, are they inherently more intelligible because they have predictable acoustic characteristics, or are they more intelligible in a mixture because they are more resistant to interference from other sounds? We recruited pairs of native English-speaking participants who were friends or romantic couples. Participants reported words from closed-set English sentences (i.