Hemispheric asymmetries for music and speech: Spectrotemporal modulations and top-down influences.

Front Neurosci

International Laboratory for Brain, Music, and Sound Research, Montreal Neurological Institute, McGill University, Montreal, QC, Canada.

Published: December 2022

Article Abstract

Hemispheric asymmetries in auditory cognition have been recognized for a long time, but their neural basis is still debated. Here I focus on specialization for processing of speech and music, the two most important auditory communication systems that humans possess. A great deal of evidence from lesion studies and functional imaging suggests that aspects of music linked to the processing of pitch patterns depend more on right than left auditory networks. A complementary specialization for temporal resolution has been suggested for left auditory networks. These diverse findings can be integrated within the context of the spectrotemporal modulation framework, which has been developed as a way to characterize efficient neuronal encoding of complex sounds. Recent studies show that degradation of spectral modulation impairs melody perception but not speech content, whereas degradation of temporal modulation has the opposite effect. Neural responses in the right and left auditory cortex in those studies are linked to processing of spectral and temporal modulations, respectively. These findings provide a unifying model to understand asymmetries in terms of sensitivity to acoustical features of communication sounds in humans. However, this explanation does not account for evidence that asymmetries can shift as a function of learning, attention, or other top-down factors. Therefore, it seems likely that asymmetries arise both from bottom-up specialization for acoustical modulations and top-down influences coming from hierarchically higher components of the system. Such interactions can be understood in terms of predictive coding mechanisms for perception.

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9809288
DOI: http://dx.doi.org/10.3389/fnins.2022.1075511

Publication Analysis

Top Keywords

left auditory (12), hemispheric asymmetries (8), modulations top-down (8), top-down influences (8), linked processing (8), auditory networks (8), auditory (5), asymmetries music (4), music speech (4), speech spectrotemporal (4)
