An adaptive sound classification framework is proposed for hearing aid applications. The long-term goal is to develop fully trainable instruments in which both the acoustical environments encountered in daily life and the hearing aid settings preferred by the user in each environmental class could be learned. Two adaptive classifiers are described, one based on minimum distance clustering and one on Bayesian classification. Through unsupervised learning, the adaptive systems allow classes to split or merge based on changes in the ongoing acoustical environments. Performance was evaluated using real-world sounds from a wide range of acoustical environments. The systems were first initialized using two classes, speech and noise, followed by a testing period when a third class, music, was introduced. Both systems were successful in detecting the presence of an additional class and estimating its underlying parameters, reaching a testing accuracy close to the target rates obtained from best-case scenarios derived from non-adaptive supervised versions of the classifiers (about 3% lower performance). The adaptive Bayesian classifier resulted in a 4% higher overall accuracy upon splitting adaptation than the minimum distance classifier. Merging accuracy was found to be the same in the two systems and within 1%-2% of the best-case supervised versions.
DOI: http://dx.doi.org/10.1121/1.3365301
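The abstract does not detail the update rules, but the minimum distance variant can be pictured as nearest-centroid classification over acoustic feature vectors, with running-mean centroid updates and a spread-based split test. The sketch below is a hypothetical illustration in that spirit; the feature representation, learning rate, and split criterion are assumptions and do not reproduce the published algorithm.

```python
import numpy as np

class AdaptiveMinimumDistanceClassifier:
    """Minimal sketch of an adaptive minimum-distance sound classifier with
    unsupervised class splitting. Parameters are illustrative assumptions,
    not the values used in the paper."""

    def __init__(self, init_centroids, learning_rate=0.01, split_spread=4.0):
        self.centroids = [np.asarray(c, dtype=float) for c in init_centroids]
        self.spreads = [0.0 for _ in self.centroids]  # running mean squared distance per class
        self.lr = learning_rate
        self.split_spread = split_spread

    def classify(self, x):
        """Assign a feature vector to the nearest class centroid."""
        x = np.asarray(x, dtype=float)
        dists = [np.linalg.norm(x - c) for c in self.centroids]
        return int(np.argmin(dists)), min(dists)

    def adapt(self, x):
        """Unsupervised update: move the winning centroid toward the input,
        track its spread, and split the class if the spread grows too large."""
        k, d = self.classify(np.asarray(x, dtype=float))
        self.centroids[k] += self.lr * (np.asarray(x, dtype=float) - self.centroids[k])
        self.spreads[k] += self.lr * (d ** 2 - self.spreads[k])
        if self.spreads[k] > self.split_spread:
            # Split: spawn a new class slightly offset from the overgrown one.
            offset = self.lr * np.random.randn(*self.centroids[k].shape)
            self.centroids.append(self.centroids[k] + offset)
            self.spreads[k] = 0.0
            self.spreads.append(0.0)
        return k
```

A merge step could analogously fuse two centroids whose mutual distance falls below a threshold, mirroring the merging behaviour evaluated in the abstract.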
eNeuro
January 2025
Neurophysiology of Everyday Life Group, Department of Psychology, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
A comprehensive analysis of everyday sound perception can be achieved using electroencephalography (EEG) together with the concurrent acquisition of information about the environment. While extensive research has been dedicated to speech perception, the complexities of auditory perception in everyday environments, specifically which types of information and which key features to extract, remain less explored. Our study aims to systematically investigate the relevance of different feature categories: discrete sound-identity markers, general cognitive state information, and acoustic representations, including discrete sound onsets, the envelope, and the mel-spectrogram.
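As a rough illustration of the acoustic representations named above (discrete sound onsets, the envelope, and the mel-spectrogram), the following sketch extracts all three from a single recording. The sampling rate, mel resolution, and the choice of librosa/SciPy are assumptions, not the study's actual pipeline.

```python
import numpy as np
import librosa
from scipy.signal import hilbert

def acoustic_features(wav_path, sr=16000, n_mels=64):
    """Compute sound onsets, broadband envelope, and log mel-spectrogram
    for one recording. Parameter values are placeholders."""
    y, sr = librosa.load(wav_path, sr=sr)

    # Discrete sound onsets (event times in seconds).
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")

    # Broadband amplitude envelope via the Hilbert transform.
    envelope = np.abs(hilbert(y))

    # Log-compressed mel-spectrogram.
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel)

    return onsets, envelope, log_mel
```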
Proc Natl Acad Sci U S A
January 2025
Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA 15213.
The auditory system is unique among sensory systems in its ability to phase lock to and precisely follow very fast cycle-by-cycle fluctuations in the phase of sound-driven cochlear vibrations. Yet, the perceptual role of this temporal fine structure (TFS) code is debated. This fundamental gap is attributable to our inability to experimentally manipulate TFS cues without altering other perceptually relevant cues.
Alzheimers Dement
December 2024
Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom.
Background: Patients with behavioural variant frontotemporal dementia (bvFTD) and right temporal variant frontotemporal dementia (rtvFTD) commonly exhibit abnormal hedonic and other behavioural responses to sounds; however, hearing dysfunction in these disorders is poorly characterised. Here we addressed this issue using the Queen Square Tests of Auditory Cognition (QSTAC), a neuropsychological battery for the systematic assessment of central auditory functions (including pitch pattern perception, environmental sound recognition, sound localisation and emotion processing) in cognitively impaired people.
Method: The QSTAC was administered to 12 patients with bvFTD, 7 patients with rtvFTD, 24 patients with comparator dementia syndromes (primary progressive aphasia and typical Alzheimer's disease), and 15 healthy age-matched individuals.
Sci Rep
January 2025
Institute for the Future of Human Society, Kyoto University, Kyoto, Japan.
Objective digital measurement of gamblers visiting gambling venues is currently performed with cashless cards and facial recognition systems, but these methods are confined to a single venue. We therefore propose an objective digital measurement method that uses a transformer, a state-of-the-art machine learning model, to detect total gambling venue visitations from the sounds in gamblers' environments, including for gamblers who visit multiple venues. We sampled gambling and non-gambling event datasets from websites to create a gambling play classifier.
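The snippet above gives no architectural detail; as a hedged sketch, a transformer classifier operating on log-mel spectrogram frames might look roughly like the following. The layer sizes, mean pooling over time, and binary gambling/non-gambling label set are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SoundEventTransformer(nn.Module):
    """Hypothetical transformer classifier for environmental sound clips,
    operating on log-mel spectrogram frames."""

    def __init__(self, n_mels=64, d_model=128, n_heads=4, n_layers=2, n_classes=2):
        super().__init__()
        self.input_proj = nn.Linear(n_mels, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, mel_frames):
        # mel_frames: (batch, time, n_mels) log-mel spectrogram
        x = self.input_proj(mel_frames)
        x = self.encoder(x)
        x = x.mean(dim=1)        # average-pool over time
        return self.head(x)      # class logits

# Example: classify one clip represented as 300 mel frames.
model = SoundEventTransformer()
logits = model(torch.randn(1, 300, 64))
```

In practice the frames would come from a mel-spectrogram front end such as the one sketched earlier, and the model would be trained with a standard cross-entropy loss on labelled gambling and non-gambling clips.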
Sci Rep
January 2025
SINTEF, Department of Health Research and Department of Circulation and Medical Imaging, The Norwegian University of Science and Technology NTNU, 7491, Trondheim, Norway.
The transport of drugs into tumor cells near the center of a tumor is known to be severely hindered by high interstitial pressure and poor vascularization. The aim of this work is to investigate the possibility of inducing acoustic streaming in a tumor. Two tumor cases (breast and abdomen) are simulated to compute the acoustic streaming and temperature rise while varying the focused ultrasound transducer radius, frequency, and power at a constant duty cycle (1%).