We investigated how the aging brain copes with acoustic and syntactic challenges during spoken language comprehension. Thirty-eight healthy adults aged 54-80 years (mean = 66 years) participated in an fMRI experiment in which listeners indicated the gender of an agent in short spoken sentences that varied in syntactic complexity (object-relative vs. subject-relative center-embedded clause structures) and acoustic richness (high vs. low spectral detail, but all intelligible). We found widespread activity throughout a bilateral frontotemporal network during successful sentence comprehension.
The potential negative impact of head movement during fMRI has long been appreciated. Although a variety of prospective and retrospective approaches have been developed to help mitigate these effects, reducing head movement in the first place remains the most appealing strategy for optimizing data quality. Real-time interventions, in which participants are provided feedback regarding their scan-to-scan motion, have recently shown promise in reducing motion during resting state fMRI.
In everyday language processing, sentence context affects how readers and listeners process upcoming words. In experimental situations, it can be useful to identify words that are predicted to greater or lesser degrees by the preceding context. Here we report completion norms for 3085 English sentences, collected online using a written cloze procedure in which participants were asked to provide their best guess for the word completing a sentence.
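To make the cloze statistic concrete, here is a minimal sketch (not the authors' pipeline) of how a completion norm can be computed: the cloze probability of a word is simply the proportion of participants who produced it for a given sentence frame. The function name and example responses are hypothetical.

```python
from collections import Counter

def cloze_probabilities(responses):
    """Return the proportion of participants who produced each completion
    for a single sentence frame (its cloze probability)."""
    counts = Counter(word.strip().lower() for word in responses)
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

# Hypothetical responses to "She stirred her coffee with a ___"
responses = ["spoon"] * 7 + ["straw"] * 2 + ["stick"]
print(cloze_probabilities(responses))  # {'spoon': 0.7, 'straw': 0.2, 'stick': 0.1}
```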
Neurobiol Lang (Camb), October 2020
Understanding spoken words requires the rapid matching of a complex acoustic stimulus with stored lexical representations. The degree to which brain networks supporting spoken word recognition are affected by adult aging remains poorly understood. In the current study we used fMRI to measure the brain responses to spoken words in two conditions: an attentive listening condition, in which no response was required, and a repetition task.
Auditory attention is critical for selectively listening to speech from a single talker in a multitalker environment (e.g., Cherry, 1953).
Purpose: The goal of this study was to determine how background noise, linguistic properties of spoken sentences, and listener abilities (hearing sensitivity and verbal working memory) affect cognitive demand during auditory sentence comprehension.
Method: We tested 30 young adults and 30 older adults. Participants heard lists of sentences in quiet and in 8-talker babble at signal-to-noise ratios of +15 dB and +5 dB, which increased acoustic challenge but left the speech largely intelligible.
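As an illustration of how such signal-to-noise conditions can be constructed (a sketch assuming single-channel NumPy arrays at a common sampling rate, not the authors' actual stimulus-preparation code; the function name is hypothetical), the babble masker is rescaled so that the speech-to-babble level difference equals the target SNR in dB:

```python
import numpy as np

def mix_at_snr(speech, babble, snr_db):
    """Scale the babble masker so the speech-to-babble RMS ratio equals
    snr_db, then add it to the speech (1-D arrays, same sampling rate)."""
    babble = babble[: len(speech)]                    # trim masker to speech length
    rms_speech = np.sqrt(np.mean(speech ** 2))
    rms_babble = np.sqrt(np.mean(babble ** 2))
    target_babble_rms = rms_speech / (10 ** (snr_db / 20.0))
    return speech + babble * (target_babble_rms / rms_babble)

# The two SNR conditions described above (speech and babble are hypothetical arrays)
# mix_plus15 = mix_at_snr(speech, babble, 15)
# mix_plus5  = mix_at_snr(speech, babble, 5)
```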
Auditory selective attention makes it possible to pick out one speech stream that is embedded in a multispeaker environment. We adapted a cued dichotic listening task to examine suppression of a speech stream lateralized to the nonattended ear, and to evaluate the effects of attention on the right ear's well-known advantage in the perception of linguistic stimuli. After being cued to attend to input from either their left or right ear, participants heard two different four-word streams presented simultaneously to the separate ears.
Psychon Bull Rev, August 2017
Contextual and sensory information are combined in speech perception. Conflict between the two can lead to false hearing, defined as a high-confidence misidentification of a spoken word. Rogers, Jacoby, and Sommers (Psychology and Aging, 27(1), 33-45, 2012) found that older adults are more susceptible to false hearing than are young adults, using a combination of semantic priming and repetition priming to create context.
Background/Study Context: A common goal during speech comprehension is to remember what we have heard. Encoding speech into long-term memory frequently requires processes such as verbal working memory that may also be involved in processing degraded speech. Here the authors tested whether young and older adult listeners' memory for short stories was worse when the stories were acoustically degraded, or whether the additional contextual support provided by a narrative would protect against these effects.
J Acoust Soc Am, July 2015
Older adults' normally adaptive use of semantic context to aid word recognition can have the negative consequence of causing misrecognitions, especially when the word actually spoken sounds similar to a word that more closely fits the context. Word pairs were presented to young and older adults, with the second word of the pair masked by multi-talker babble varying in signal-to-noise ratio. Results confirmed that older adults were more likely than young adults to misidentify words based on their semantic context, and to do so with a higher level of confidence.
Background/Study Context: Older adults, especially those with reduced hearing acuity, can make good use of linguistic context in word recognition. Less is known about the effects of the weighted distribution of probable target and nontarget words that fit the sentence context (response entropy). The present study examined the effects of age, hearing acuity, linguistic context, and response entropy on spoken word recognition.
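Response entropy here describes how spread out the distribution of plausible completions is. Assuming it is quantified with the standard Shannon entropy over cloze responses (a common choice; the snippet above does not state the exact computation), a minimal sketch looks like this:

```python
import math
from collections import Counter

def response_entropy(responses):
    """Shannon entropy (in bits) of the distribution of completions produced
    for a sentence frame; higher values mean responses are spread across
    more candidate words."""
    counts = Counter(word.strip().lower() for word in responses)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# One dominant completion -> low entropy; many competitors -> high entropy
print(response_entropy(["spoon"] * 9 + ["straw"]))                   # ~0.47 bits
print(response_entropy(["spoon", "straw", "stick", "fork", "pen"]))  # ~2.32 bits
```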
In two experiments testing age differences in the subjective experience of listening, which we call meta-audition, young and older adults were first trained to learn pairs of semantic associates. Following training, both groups were tested on identification of words presented in noise, with the critical manipulation being whether the target item was congruent, incongruent, or neutral with respect to prior training. Results of both experiments revealed that older adults were more prone than young adults to "false hearing," defined as mistaken high confidence in the accuracy of perception when a spoken word had been misperceived.
Results of three experiments revealed that older, as compared to young, adults are more reliant on context when "seeing" a briefly flashed word that was preceded by a prime. In a congruent condition, the prime was the same word as flashed (e.g.
Results from two experiments revealed that prior experience with proactive interference (PI) diminished PI's effects for both young and older adults. Participants were given two rounds of experience, with different materials, in a situation that produced PI. Comparisons with a control condition showed that the effects of PI on accuracy and on high-confidence intrusion errors (false memory) were reduced on the second round, as compared with those on the first.