Attention Modulates the Auditory Cortical Processing of Spatial and Category Cues in Naturalistic Auditory Scenes.

Front Neurosci

Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Center for Systems Biology (MaCSBio), Maastricht University, Maastricht, Netherlands.

Published: July 2016

This combined fMRI and MEG study investigated brain activation during listening and attending to natural auditory scenes. We first recorded vocal non-speech sounds and environmental sounds with in-ear microphones and mixed them to construct auditory scenes containing two concurrent sound streams. During the brain measurements, subjects attended to one of the streams while the spatial acoustic information of the scene was either preserved (stereophonic sounds) or removed (monophonic sounds). Compared to monophonic sounds, stereophonic sounds evoked larger blood-oxygenation-level-dependent (BOLD) fMRI responses in bilateral posterior superior temporal areas, independent of which stimulus attribute the subject was attending to. This finding is consistent with the functional role of these regions in the (automatic) processing of auditory spatial cues. Additionally, the cortical activation patterns differed significantly depending on the target of attention. Bilateral planum temporale and inferior frontal gyrus were preferentially activated when subjects attended to stereophonic environmental sounds, whereas when they attended to stereophonic voice sounds, BOLD responses were larger in bilateral middle superior temporal gyrus and sulcus, regions previously reported to show voice sensitivity. In contrast, the time-resolved MEG responses were stronger for mono- than for stereophonic sounds in the bilateral auditory cortices at ~360 ms after stimulus onset when subjects attended to the voice excerpts within the combined sounds. The observed effects suggest that, during the segregation of auditory objects from the auditory background, spatial sound cues are processed together with other relevant temporal and spectral cues in an attention-dependent manner at the cortical locations generally involved in sound recognition. More synchronous neuronal activation during monophonic than stereophonic sound processing, as well as (local) neuronal inhibitory mechanisms in the auditory cortex, may explain the simultaneous increase of BOLD responses and decrease of MEG responses. These findings highlight the complementary roles of electrophysiological and hemodynamic measures in addressing the brain processing of complex stimuli.
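As an illustration of the stimulus construction described above, here is a minimal Python sketch (not the authors' code) of how two recorded streams might be mixed into a stereophonic scene and a diotic monophonic control. The file names, the soundfile I/O library, and the peak normalization are assumptions.

    # Illustrative sketch: build a two-stream scene in stereo (spatial cues
    # preserved) and mono (spatial cues removed). File names are hypothetical.
    import numpy as np
    import soundfile as sf

    voice, fs = sf.read("voice_excerpt.wav")          # (n_samples, 2) stereo recording
    ambience, _ = sf.read("environment_excerpt.wav")  # (n_samples, 2) stereo recording

    n = min(len(voice), len(ambience))
    stereo_scene = voice[:n] + ambience[:n]           # interaural cues preserved

    # Monophonic control: average the two channels, then duplicate so the
    # scene is diotic and spatial cues are removed.
    mono_scene = np.repeat(stereo_scene.mean(axis=1, keepdims=True), 2, axis=1)

    # Scale both versions by the same factor to avoid loudness confounds
    # (an assumption, not a detail reported in the abstract).
    peak = max(np.abs(stereo_scene).max(), np.abs(mono_scene).max())
    sf.write("scene_stereo.wav", stereo_scene / peak, fs)
    sf.write("scene_mono.wav", mono_scene / peak, fs)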


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4894904
DOI: http://dx.doi.org/10.3389/fnins.2016.00254

Publication Analysis

Top Keywords

auditory scenes: 12
stereophonic sounds: 12
sounds: 10
auditory: 9
environmental sounds: 8
subjects attended: 8
monophonic sounds: 8
superior temporal: 8
BOLD responses: 8
MEG responses: 8

Similar Publications

Attentional Inhibition Ability Predicts Neural Representation During Challenging Auditory Streaming.

Cogn Affect Behav Neurosci

January 2025

Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France.

Focusing on a single source within a complex auditory scene is challenging. M/EEG-based auditory attention detection (AAD) makes it possible to detect which of several concurrent streams an individual is attending to. The high interindividual variability in AAD performance is often attributed to physiological factors and to the signal-to-noise ratio of the neural data.
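For illustration, the following Python sketch shows one widely used AAD scheme, stimulus reconstruction with ridge regression, run on synthetic data; the data shapes, regularization strength, and train/test split are assumptions rather than this study's pipeline.

    # Hedged sketch of stimulus-reconstruction AAD on synthetic data: decode
    # the attended envelope from sensor data, then pick the candidate stream
    # whose envelope correlates best with the reconstruction.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    n, ch = 5000, 64                         # samples and sensors (stand-ins)
    env_a = rng.standard_normal(n)           # envelope of stream A (attended)
    env_b = rng.standard_normal(n)           # envelope of stream B
    meg = np.outer(env_a, rng.standard_normal(ch)) + 0.5 * rng.standard_normal((n, ch))

    half = n // 2
    decoder = Ridge(alpha=1.0).fit(meg[:half], env_a[:half])  # supervised training
    recon = decoder.predict(meg[half:])
    r_a = np.corrcoef(recon, env_a[half:])[0, 1]
    r_b = np.corrcoef(recon, env_b[half:])[0, 1]
    print("decoded attention target:", "A" if r_a > r_b else "B")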


This study addresses how salience shapes the perceptual organization of an auditory scene. A psychophysical task that was introduced previously by Susini, Jiaouan, Brunet, Houix, and Ponsot [(2020). Sci.


Music pre-processing is becoming a recognized area of research, with the goal of making music more accessible to listeners with a hearing impairment. Our previous study showed that hearing-impaired listeners preferred spectrally manipulated multi-track mixes. Nevertheless, the acoustical basis of mixing for hearing-impaired listeners remains poorly understood.
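As one concrete reading of a spectrally manipulated multi-track mix, here is a minimal Python sketch that brightens each stem with a simple high-frequency boost before summing; the filter design, cutoff, and gain are illustrative assumptions, not the method used in the cited study.

    # Illustrative sketch: per-stem high-frequency emphasis, then a mixdown.
    import numpy as np
    from scipy.signal import butter, sosfilt

    def brighten(track, fs, cutoff=2000.0, gain_db=6.0):
        """Boost content above `cutoff` Hz by roughly `gain_db` (assumed values)."""
        sos = butter(2, cutoff, btype="highpass", fs=fs, output="sos")
        return track + (10 ** (gain_db / 20) - 1) * sosfilt(sos, track)

    fs = 44100
    t = np.arange(fs) / fs
    stems = [np.sin(2 * np.pi * f * t) for f in (220.0, 440.0, 3000.0)]  # synthetic stems
    mix = sum(brighten(s, fs) for s in stems) / len(stems)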


Introduction: The ASME (Auditory Stream segregation Multiclass ERP) paradigm is proposed and used for an auditory brain-computer interface (BCI). In this paradigm, sequences of sounds that are perceived as multiple simultaneous auditory streams are presented, and each stream is an oddball sequence. Users are asked to focus selectively on the deviant stimuli in one of the streams, and the target of their attention is detected by decoding event-related potentials (ERPs).
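To make the decoding step concrete, here is a hedged Python sketch of single-trial ERP classification with shrinkage LDA on synthetic epochs, a standard choice for oddball ERPs; the feature dimensions and effect size are assumptions, not the ASME implementation.

    # Hedged sketch: classify flattened ERP epochs as target vs. non-target.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(1)
    n_epochs, n_features = 200, 32 * 20        # e.g., 32 channels x 20 time points
    X = rng.standard_normal((n_epochs, n_features))
    y = rng.integers(0, 2, n_epochs)           # 1 = deviant in the attended stream
    X[y == 1] += 0.3                           # synthetic attention-modulated ERP

    clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto").fit(X, y)
    print("training accuracy:", clf.score(X, y))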


Interaural time differences are often considered a weak cue for stream segregation. We investigated this claim using headphone-presented pure tones differing in a related form of interaural configuration, interaural phase difference (ΔIPD), and/or in frequency (ΔF). In experiment 1, sequences comprised 5 × ABA- repetitions (A and B = 80-ms tones, "-" = 160-ms silence), and listeners reported whether they heard integration or segregation.
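The stimulus structure is concrete enough for a short sketch: the Python code below (an illustration, not the authors' stimulus code) generates one 5 × ABA- sequence of 80-ms tones separated by a 160-ms silence, applying an assumed ΔIPD to the B tone while keeping ΔF = 0; the tone frequency, sampling rate, and IPD value are assumptions.

    # Illustrative ABA- sequence with an interaural phase difference on B.
    import numpy as np

    fs = 48000  # assumed sampling rate

    def tone(freq, dur=0.080, ipd=0.0):
        """An 80-ms pure tone; `ipd` (radians) is applied to the right ear."""
        t = np.arange(int(fs * dur)) / fs
        return np.column_stack([np.sin(2 * np.pi * freq * t),
                                np.sin(2 * np.pi * freq * t + ipd)])

    silence = np.zeros((int(fs * 0.160), 2))
    a = tone(500.0)                   # A tone, diotic (assumed 500 Hz)
    b = tone(500.0, ipd=np.pi / 2)    # B tone with ΔIPD; ΔF = 0 in this example
    sequence = np.vstack([np.vstack([a, b, a, silence])] * 5)  # 5 x ABA-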

