The gammatone filter was imported from auditory physiology to provide a time-domain version of the roex auditory filter and enable the development of a realistic auditory filterbank for models of auditory perception [Patterson et al., J. Acoust. Soc. Am. 98, 1890-1894 (1995)]. The gammachirp auditory filter was developed to extend the domain of the gammatone auditory filter and simulate the changes in filter shape that occur with changes in stimulus level. Initially, the gammachirp filter was limited to center frequencies in the 2.0-kHz region, where there were sufficient "notched-noise" masking data to define its parameters accurately. Recently, however, the range of the masking data has been extended in two large-scale studies. This paper reports how a compressive version of the gammachirp auditory filter was fitted to these new data sets to define the filter parameters over the extended frequency range. The results show that the shape of the filter can be specified over the entire domain of the data (center frequencies from 0.25 to 6.0 kHz and levels from 30 to 80 dB SPL) using just six constants. The compressive gammachirp auditory filter also has the advantage of being consistent with physiological studies of cochlear filtering, insofar as the compression of the filter is mainly limited to the passband and the form of the chirp in the impulse response is largely independent of level.
DOI: http://dx.doi.org/10.1121/1.1600720
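As background for readers unfamiliar with this filter family, the sketch below generates a gammachirp impulse response in numpy, following the analytic form g(t) = t^(n-1) exp(-2πb·ERB(f_c)·t) cos(2πf_c·t + c·ln t + φ) from Irino and Patterson (1997). The parameter values n, b, and c are illustrative defaults, not the six fitted constants reported in this paper; setting c = 0 recovers the ordinary gammatone filter.

```python
import numpy as np

def erb(fc_hz):
    """Equivalent rectangular bandwidth of the auditory filter
    (Glasberg & Moore, 1990)."""
    return 24.7 * (4.37 * fc_hz / 1000.0 + 1.0)

def gammachirp_ir(fc_hz, fs_hz, dur_s=0.05, n=4, b=1.019, c=-2.0, phase=0.0):
    """Gammachirp impulse response:
    g(t) = t^(n-1) * exp(-2*pi*b*ERB(fc)*t) * cos(2*pi*fc*t + c*ln(t) + phase)
    n, b, and c here are illustrative, not the fitted values from the paper.
    """
    t = np.arange(1, int(dur_s * fs_hz)) / fs_hz  # start one sample in to avoid ln(0)
    envelope = t ** (n - 1) * np.exp(-2.0 * np.pi * b * erb(fc_hz) * t)
    carrier = np.cos(2.0 * np.pi * fc_hz * t + c * np.log(t) + phase)
    g = envelope * carrier
    return g / np.max(np.abs(g))  # peak-normalize

# Example: a 2-kHz gammachirp at a 16-kHz sampling rate.
ir = gammachirp_ir(2000.0, 16000.0)
```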
J Acoust Soc Am, January 2025
Department of Electronics Engineering, Pusan National University, Busan, South Korea.
The amount of information contained in speech signals is a fundamental concern of speech-based technologies and is particularly relevant in speech perception. Measuring the mutual information of actual speech signals is non-trivial, and few quantitative measurements have been reported to date. Recent advances in machine learning have made it possible to estimate mutual information directly from data.
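The abstract does not name the estimator used, but as a crude point of reference, a plug-in (histogram) mutual-information estimate for two 1-D signals can be written as below; the machine-learning approaches mentioned above replace this with learned estimators that scale to high-dimensional speech representations. The function name is illustrative.

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Plug-in (histogram) estimate of I(X; Y) in bits for two 1-D signals."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                # joint distribution
    px = pxy.sum(axis=1, keepdims=True)  # marginal of X
    py = pxy.sum(axis=0, keepdims=True)  # marginal of Y
    nz = pxy > 0                         # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# Dependent signals give high MI; independent signals give MI near zero.
rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
print(mutual_information(x, x + 0.1 * rng.standard_normal(x.size)))  # high
print(mutual_information(x, rng.standard_normal(x.size)))            # ~0
```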
Front Hum Neurosci, January 2025
Center for Ear-EEG, Department of Electrical and Computer Engineering, Aarhus University, Aarhus, Denmark.
Recent progress in auditory attention decoding (AAD) rests on algorithms that relate the audio envelope to the neurophysiological response. The most popular approach reconstructs the audio envelope from electroencephalogram (EEG) signals. These methods primarily capture the exogenous response driven by the physical characteristics of the stimuli.
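As a sketch of the envelope-reconstruction idea, assuming a generic ridge-regression backward model (the cited study's exact pipeline is not specified in this snippet), the decoder maps time-lagged EEG to the audio envelope:

```python
import numpy as np

def lag_matrix(eeg, n_lags):
    """Stack time-lagged copies of the EEG (time x channels) as features."""
    t, ch = eeg.shape
    X = np.zeros((t, ch * n_lags))
    for k in range(n_lags):
        X[k:, k * ch:(k + 1) * ch] = eeg[:t - k]
    return X

def train_decoder(eeg, envelope, n_lags=32, alpha=1e3):
    """Ridge-regression backward model: lagged EEG -> audio envelope."""
    X = lag_matrix(eeg, n_lags)
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]),
                           X.T @ envelope)

def reconstruct(eeg, weights, n_lags=32):
    return lag_matrix(eeg, n_lags) @ weights

# Attention is then decoded by correlating the reconstruction with the
# envelope of each competing talker and selecting the better-matched one.
```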
Front Child Adolesc Psychiatry, August 2024
Department of Occupational Therapy Sciences, Nagasaki University Graduate School of Biomedical Sciences, Nagasaki, Japan.
Background: Restricted and repetitive behavior (RRB) is a core symptom of autism spectrum disorder (ASD). The structure of RRB subcategories and their relationship to atypical sensory processing are not well understood in Japan. This study examined RRB subcategories in Japanese children with ASD and explored their relationship with sensory processing.
PLoS One, January 2025
Department of Psychology, University of British Columbia, Vancouver, BC, Canada.
The built environments we move through act as a filter for the stimuli we experience: a neutrally valenced sound may be perceived as more unpleasant in a darker room or space and as more pleasant in a lighter one. Past research suggests that the layout and lighting of a space influence how stimuli are rated, especially on bipolar valence scales.
Ear Hear, December 2024
Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska, USA.
Objectives: To investigate the influence of frequency-specific audibility on audiovisual benefit in children, this study examined the impact of high- and low-pass acoustic filtering on auditory-only and audiovisual word and sentence recognition in children with typical hearing. Previous studies show that visual speech provides greater access to consonant place of articulation than to other consonant features, and that low-pass filtering strongly impairs the perception of acoustic cues to consonant place of articulation. This suggests that visual speech may be particularly useful when acoustic speech is low-pass filtered, because it provides complementary information about consonant place of articulation.
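The study's filter settings are not given in this snippet; as a generic illustration of the band-limiting manipulation, zero-phase Butterworth filtering of a speech waveform might look like the following (the cutoff frequency and order are placeholders, not the study's values):

```python
from scipy.signal import butter, sosfiltfilt

def lowpass_speech(signal, fs_hz, cutoff_hz=2000.0, order=8):
    """Zero-phase Butterworth low-pass: keeps only low-frequency speech cues."""
    sos = butter(order, cutoff_hz, btype="low", fs=fs_hz, output="sos")
    return sosfiltfilt(sos, signal)

def highpass_speech(signal, fs_hz, cutoff_hz=2000.0, order=8):
    """Complementary high-pass condition."""
    sos = butter(order, cutoff_hz, btype="high", fs=fs_hz, output="sos")
    return sosfiltfilt(sos, signal)
```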