Harmonic Cancellation: A Fundamental of Auditory Scene Analysis

Trends in Hearing

Laboratoire des systèmes perceptifs, CNRS, Paris, France.

Published: April 2022

This paper reviews the hypothesis according to which an interfering sound is suppressed or canceled on the basis of its harmonicity (or periodicity in the time domain) for the purpose of auditory scene analysis. It defines the concept, discusses theoretical arguments in its favor, and reviews experimental results that support it, or not. If correct, the hypothesis may draw on time-domain processing of temporally accurate neural representations within the brainstem, as required also by the classic equalization-cancellation model of binaural unmasking. The hypothesis predicts that a target sound corrupted by interference will be easier to hear if the interference is harmonic than if it is inharmonic, all else being equal. This prediction is borne out in a number of behavioral studies, but not in all. The paper reviews those results with the aim of understanding the inconsistencies and reaching a reliable conclusion for, or against, the hypothesis of harmonic cancellation within the auditory system.
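The core idea of time-domain harmonic cancellation can be illustrated with a delay-and-subtract comb filter: if the interferer is periodic with period T, subtracting the signal delayed by exactly T removes the interferer while leaving a non-harmonically-related target largely intact. The following is a minimal sketch of that idea, not the paper's model; all parameters (fundamental frequency, sample rate, target frequency, harmonic count) are illustrative, and the interferer period is assumed known.

```python
import numpy as np

fs = 16000                      # sample rate (Hz), illustrative
t = np.arange(0, 0.5, 1 / fs)   # 0.5 s of signal
f0 = 100.0                      # interferer fundamental (Hz), chosen so fs/f0 is an integer

# Harmonic interferer: sum of the first 10 harmonics of f0
interferer = sum(np.sin(2 * np.pi * f0 * k * t) for k in range(1, 11))

# Weak target tone, not harmonically related to f0
target = 0.1 * np.sin(2 * np.pi * 530.0 * t)
mixture = target + interferer

# Cancellation comb filter: y[n] = x[n] - x[n - T], with T = one interferer period
T = int(round(fs / f0))
filtered = mixture[T:] - mixture[:-T]

def power(x):
    """Mean-square power of a signal segment."""
    return np.mean(x ** 2)

# The periodic interferer cancels (to numerical precision); the target
# survives, phase-shifted and rescaled by the comb filter's gain.
print(f"mixture power:  {power(mixture):.4f}")
print(f"filtered power: {power(filtered):.4f}")
```

Because the delay matches the interferer's period exactly, the comb filter has spectral nulls at every harmonic of f0, which is why a harmonic interferer is far easier to cancel this way than an inharmonic one.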


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8552394
DOI: http://dx.doi.org/10.1177/23312165211041422


Similar Publications

Introduction: The ASME (Auditory Stream segregation Multiclass ERP) paradigm is proposed and used for an auditory brain-computer interface (BCI). In this paradigm, sequences of sounds that are perceived as multiple auditory streams are presented simultaneously, and each stream is an oddball sequence. Users are asked to focus selectively on the deviant stimuli in one of the streams, and the target of the user's attention is detected by decoding event-related potentials (ERPs).

View Article and Find Full Text PDF

Interaural time differences are often considered a weak cue for stream segregation. We investigated this claim with headphone-presented pure tones differing in a related form of interaural configuration, interaural phase differences (ΔIPD), and/or in frequency (ΔF). In experiment 1, sequences comprised 5 × ABA- repetitions (A and B = 80-ms tones, "-" = 160-ms silence), and listeners reported whether they heard integration or segregation.


Introduction: We currently lack speech testing materials faithful to broader aspects of real-world auditory scenes, such as speech directivity and extended high-frequency (EHF; >8 kHz) content, that have demonstrable effects on speech perception. Here, we describe the development of a multidirectional, high-fidelity speech corpus using multichannel anechoic recordings that can be used for future studies of speech perception in complex environments by diverse listeners.

Design: Fifteen male and 15 female talkers (21.


Understanding how early scene viewing is guided can reveal fundamental brain mechanisms for quickly making sense of our surroundings. Viewing is often initiated from the left side. Across two experiments, we focused on search initiation for lateralised targets within real-world scenes, investigating the role of the cerebral hemispheres in guiding the first saccade.


Hearing impairment alters the sound input received by the human auditory system, reducing speech comprehension in noisy multi-talker auditory scenes. Despite such difficulties, neural signals have been shown to encode the attended speech envelope more reliably than the envelope of ignored sounds, reflecting the intention of listeners with hearing impairment (HI). This result raises an important question: What speech-processing stage could reflect the difficulty in attentional selection, if not envelope tracking? Here, we use scalp electroencephalography (EEG) to test the hypothesis that the neural encoding of phonological information (i.

