Studies of auditory scene analysis have traditionally relied on paradigms using artificial sounds, together with conventional behavioral techniques, to elucidate how we perceptually segregate auditory objects or streams from each other. In the past few decades, however, there has been growing interest in uncovering the neural underpinnings of auditory segregation using human and animal neuroscience techniques, as well as computational modeling. This largely reflects the growth of the fields of cognitive neuroscience and computational neuroscience and has led to new theories of how the auditory system segregates sounds in complex arrays. The current review focuses on neural and computational studies of auditory scene perception published in the last few years. In light of the progress made in these studies, we describe (1) theoretical advances in our understanding of the most well-studied aspects of auditory scene perception, namely segregation of sequential patterns of sounds and of concurrently presented sounds; (2) the diversification of topics and paradigms that have been investigated; and (3) how new neuroscience techniques (including invasive neurophysiology in awake humans, genotyping, and brain stimulation) have been used in this field.
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5446279
DOI: http://dx.doi.org/10.1111/nyas.13317
PLoS One
January 2025
Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York, NY, United States of America.
Objective: What we hear may influence postural control, particularly in people with vestibular hypofunction. Would hearing a moving subway destabilize people in the same way that seeing the train move does? We investigated how people with unilateral vestibular hypofunction and healthy controls incorporated broadband and real-recorded sounds, together with visual load, for balance in an immersive contextual scene.
Design: Participants stood on foam placed on a force platform, wore the HTC Vive headset, and observed an immersive subway environment.
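As a rough illustration of how balance is commonly quantified from force-platform recordings, the sketch below computes standard center-of-pressure (COP) sway summaries. The function name, data shapes, and choice of metrics are illustrative assumptions, not the outcome measures reported in this study.

```python
# Minimal sketch of common postural-sway summaries from force-platform
# center-of-pressure (COP) data; hypothetical names, not the study's metrics.
import numpy as np

def sway_metrics(cop_xy, fs):
    """cop_xy: (n_samples, 2) COP coordinates in metres; fs: sampling rate in Hz."""
    steps = np.diff(cop_xy, axis=0)
    path_length = np.sum(np.linalg.norm(steps, axis=1))                  # total COP excursion
    mean_velocity = path_length / (len(cop_xy) / fs)                     # average sway velocity
    rms = np.sqrt(np.mean((cop_xy - cop_xy.mean(axis=0)) ** 2, axis=0))  # AP/ML RMS sway
    return {"path_length_m": path_length,
            "mean_velocity_m_s": mean_velocity,
            "rms_ap_ml_m": rms}
```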
Neurophotonics
January 2025
Washington University School of Medicine, Mallinckrodt Institute of Radiology, St. Louis, Missouri, United States.
Significance: Decoding naturalistic content from brain activity has important neuroscience and clinical implications. Information about visual scenes and intelligible speech has been decoded from cortical activity using functional magnetic resonance imaging (fMRI) and electrocorticography, but widespread applications are limited by the logistics of these technologies.
Aim: High-density diffuse optical tomography (HD-DOT) offers image quality approaching that of fMRI but with the silent, open scanning environment afforded by optical methods, thus opening the door to more naturalistic research and applications.
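To make the decoding idea concrete, here is a minimal identification-style decoding sketch of the kind often used with fMRI or HD-DOT data: an encoding model predicts brain responses from stimulus features, and a held-out response is matched against candidate stimuli. The names, array shapes, and ridge formulation are assumptions for illustration, not the pipeline used in the cited work.

```python
# Minimal sketch of identification-style decoding from brain imaging data
# (e.g., fMRI voxels or HD-DOT reconstructions). Hypothetical names and shapes.
import numpy as np

def fit_encoding_model(features, responses, ridge=1.0):
    """Ridge regression from stimulus features (time x f) to brain responses (time x v)."""
    F = features
    return np.linalg.solve(F.T @ F + ridge * np.eye(F.shape[1]), F.T @ responses)

def identify_stimulus(weights, candidate_features, observed_response):
    """Predict a response for each candidate stimulus and pick the best-matching one."""
    scores = []
    for feats in candidate_features:            # each candidate: time x f
        pred = feats @ weights                  # predicted time x v response
        scores.append(np.corrcoef(pred.ravel(), observed_response.ravel())[0, 1])
    return int(np.argmax(scores)), scores
```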
Sci Rep
January 2025
Department of Psychology, New York University, New York, NY, USA.
Music can evoke powerful emotions in listeners. However, the role that instrumental music (music without any vocal part) plays in conveying extra-musical meaning, above and beyond emotions, is still a debated question. We conducted a study wherein participants (N = 121) listened to twenty 15-second-long excerpts of polyphonic instrumental soundtrack music and reported (i) perceived emotions (e.
Cogn Affect Behav Neurosci
January 2025
Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France.
Focusing on a single source within a complex auditory scene is challenging. M/EEG-based auditory attention detection (AAD) makes it possible to detect which stream an individual is attending to within a set of multiple concurrent streams. The high interindividual variability in AAD performance is often attributed to physiological factors and to the signal-to-noise ratio of the neural data.
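A common way to implement M/EEG-based AAD is stimulus reconstruction: a linear backward model reconstructs the speech envelope from time-lagged neural data, and the attended stream is taken to be the one whose envelope correlates best with the reconstruction. The sketch below follows that general recipe; the function names, lag count, and ridge parameter are illustrative assumptions rather than the authors' method.

```python
# Minimal sketch of correlation-based auditory attention detection (AAD)
# via stimulus reconstruction. Hypothetical data shapes and parameter values.
import numpy as np

def lagged(eeg, n_lags):
    """Stack time-lagged copies of the EEG (time x channels) into a design matrix."""
    t, c = eeg.shape
    X = np.zeros((t, c * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * c:(lag + 1) * c] = eeg[:t - lag]
    return X

def train_decoder(eeg, attended_env, n_lags=32, ridge=1e2):
    """Fit a backward (stimulus-reconstruction) model: lagged EEG -> speech envelope."""
    X = lagged(eeg, n_lags)
    XtX = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ attended_env)

def detect_attention(eeg, env_a, env_b, weights, n_lags=32):
    """Reconstruct the envelope from EEG and pick the stream it correlates with most."""
    rec = lagged(eeg, n_lags) @ weights
    r_a = np.corrcoef(rec, env_a)[0, 1]
    r_b = np.corrcoef(rec, env_b)[0, 1]
    return ("A", r_a, r_b) if r_a > r_b else ("B", r_a, r_b)
```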
JASA Express Lett
January 2025
STMS, IRCAM, Sorbonne Université, CNRS, Ministère de la Culture, 75004 Paris, France.
This study addresses how salience shapes the perceptual organization of an auditory scene. A psychophysical task previously introduced by Susini, Jiaouan, Brunet, Houix, and Ponsot [(2020). Sci.