The multivariate temporal response function (mTRF) is an effective tool for investigating the neural encoding of acoustic and complex linguistic features in natural continuous speech. In this study, we investigated how neural representations of speech features derived from natural stimuli relate to early signs of cognitive decline in older adults, taking hearing ability into account. Participants without ( ) and with ( ) early signs of cognitive decline listened to an audiobook while their electroencephalography (EEG) responses were recorded. Using the mTRF framework, we modelled the relationship between the speech input and the neural response with different acoustic, segmentation-based and linguistic encoding models and examined the response functions in terms of encoding accuracy, signal power, and peak amplitudes and latencies. Our results showed no significant effect of cognitive decline or hearing ability on the neural encoding of acoustic and linguistic speech features. However, we found a significant interaction between hearing ability and the word-level segmentation model, suggesting that hearing impairment specifically affects encoding accuracy for this model, whereas the other features were unaffected by hearing ability. These results suggest that, while acoustic and linguistic speech-processing markers remain unaffected by cognitive decline and hearing loss per se, the neural encoding of word-level segmented speech features in older adults is affected by hearing loss but not by cognitive decline. This study emphasises the effectiveness of mTRF analysis for studying the neural encoding of speech and argues for extending this research to investigate its clinical relevance for hearing loss and cognition.
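The forward (encoding) mTRF described above relates the continuous speech input to the EEG response through a set of time-lagged regression weights, typically estimated with ridge regression and evaluated by correlating predicted with recorded EEG (encoding accuracy). The sketch below is a minimal, illustrative NumPy implementation of that idea; the function names, lag window, regularisation parameter and the simulated envelope/EEG signals are assumptions for demonstration only and do not reproduce the authors' pipeline or any specific mTRF toolbox API.

```python
# Minimal forward mTRF sketch: one 1-D stimulus feature (e.g. speech envelope),
# one EEG channel, ridge-regularised time-lagged regression.
import numpy as np

def lagged_design_matrix(stimulus, lags):
    """Stack time-shifted copies of the stimulus, one column per lag (in samples)."""
    n = len(stimulus)
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stimulus[:n - lag]
        else:
            X[:lag, j] = stimulus[-lag:]
    return X

def fit_trf(stimulus, eeg, fs, tmin=-0.1, tmax=0.4, alpha=1.0):
    """Estimate TRF weights w so that eeg(t) ~ sum_tau w(tau) * stimulus(t - tau)."""
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    X = lagged_design_matrix(stimulus, lags)
    # Closed-form ridge solution: w = (X'X + alpha*I)^-1 X'y
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ eeg)
    return lags, w

# Toy usage with simulated data (a real analysis would cross-validate and use
# the measured speech envelope and recorded EEG).
fs = 128
envelope = np.abs(np.random.randn(fs * 60))                 # placeholder speech envelope
eeg = np.convolve(envelope, np.hanning(20), mode="same")    # toy "neural" response
eeg = eeg + 0.5 * np.random.randn(len(eeg))                 # additive noise
lags, w = fit_trf(envelope, eeg, fs)
pred = lagged_design_matrix(envelope, lags) @ w
print("encoding accuracy r =", np.corrcoef(pred, eeg)[0, 1])
```

In practice the TRF weights (one per lag and feature) are what the study inspects for peak amplitudes and latencies, while the prediction correlation provides the encoding-accuracy measure compared across participant groups.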
Source links:
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11330478
DOI: http://dx.doi.org/10.1038/s41598-024-69602-1