This paper proposes a sound event detection (SED) method for tunnels to help prevent accidents from escalating. Tunnel accidents are typically accompanied by crashes and tire skids, which produce abnormal sounds. Because tunnels are persistently and severely noisy environments, the detection accuracy of existing methods can be greatly degraded. To address this noise problem, the proposed method combines a preprocessing stage for tunnel acoustic signals with a classifier for detecting acoustic events. In the preprocessing stage, a non-negative tensor factorization (NTF) technique separates the acoustic event signal from the noisy tunnel signal. The NTF technique developed in this paper combines source separation with online noise learning: the noise basis is continuously adapted so that enhancement remains effective under adverse noise conditions. A convolutional recurrent neural network (CRNN) is then extended to accommodate the contributions of both the separated event signal and the noise estimate to event detection; the proposed CRNN therefore consists of event convolution layers and noise convolution layers in parallel, followed by recurrent layers and an output layer. Mel-filterbank features are used as the network input.

The proposed method is evaluated on two datasets: a publicly available road audio events dataset and a tunnel audio dataset recorded in an operating traffic tunnel over six months. In the first evaluation, where the background noise is low, the proposed CRNN-based SED method with online noise learning reduces the relative recognition error rate by 56.25% compared with the conventional CRNN-based method on the noisy data. In the second evaluation, where the tunnel background noise is more severe, the proposed CRNN-based SED method outperforms the conventional methods. In particular, among all compared methods, the proposed method with online noise learning achieves the best recognition rate of 91.07% and reduces the recognition error rate by 47.40% and 28.56% relative to the Gaussian mixture model (GMM)-hidden Markov model (HMM)-based and conventional CRNN-based SED methods, respectively. Computational complexity measurements also show that the proposed method requires 599 ms to perform both NTF-based source separation with online noise learning and CRNN classification on one second of noisy tunnel signal, implying that events can be detected in real time.
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6631336 | PMC
http://dx.doi.org/10.3390/s19122695 | DOI Listing
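As a concrete illustration of the parallel event/noise architecture described in the abstract above, the following is a minimal, hypothetical PyTorch sketch. The layer counts, channel widths, 64-bin log-mel input, and four output classes are assumptions for illustration, not the authors' published configuration; the NTF front end is represented only by its two outputs (the separated event spectrogram and the noise estimate).

```python
# Hypothetical sketch of a parallel event/noise CRNN; sizes are assumptions.
import torch
import torch.nn as nn

class ParallelCRNN(nn.Module):
    def __init__(self, n_mels=64, n_classes=4, hidden=64):
        super().__init__()
        # One convolutional branch for the separated event signal and one
        # for the noise estimate produced by the NTF front end.
        def conv_branch():
            return nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d((1, 4)),          # pool along the mel axis only
                nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d((1, 4)),
            )
        self.event_branch = conv_branch()
        self.noise_branch = conv_branch()
        feat_dim = 2 * 32 * (n_mels // 16)     # concatenated branch outputs
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, event_mel, noise_mel):
        # inputs: (batch, 1, time, n_mels) log-mel features
        e = self.event_branch(event_mel)
        n = self.noise_branch(noise_mel)
        x = torch.cat([e, n], dim=1)           # merge the two branches
        b, c, t, f = x.shape
        x = x.permute(0, 2, 1, 3).reshape(b, t, c * f)
        x, _ = self.rnn(x)                     # recurrent layers over time
        return self.out(x)                     # frame-wise event scores

model = ParallelCRNN()
event_mel = torch.randn(8, 1, 100, 64)         # dummy batch: 100 frames, 64 mel bins
noise_mel = torch.randn(8, 1, 100, 64)
scores = model(event_mel, noise_mel)           # shape (8, 100, 4)
```

Feeding the noise estimate through its own branch lets the recurrent layers weigh how much of the observed energy is attributable to background noise rather than to an acoustic event.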
Ann Intern Med
January 2025
Department of Epidemiology and Welch Center for Prevention, Epidemiology, and Clinical Research, Johns Hopkins Bloomberg School of Public Health, Baltimore; and Department of Pediatrics, Johns Hopkins University School of Medicine, Baltimore, Maryland (T.M.B.).
Background: Guidelines emphasize quiet settings for blood pressure (BP) measurement.
Objective: To determine the effect of noise and public environment on BP readings.
Design: Randomized crossover trial of adults in Baltimore, Maryland.
PLoS One
January 2025
Institute of Robotics and Cybernetics, Faculty of Electrical Engineering and Information Technology, Slovak University of Technology in Bratislava, Bratislava, Slovakia.
This paper introduces a novel approach for the offline estimation of stationary moving average processes and extends it to efficient online estimation of non-stationary processes. The novelty lies in a technique for solving the autocorrelation function matching problem that leverages the fact that the autocorrelation function of colored noise equals the autocorrelation function of the coefficients of the moving average process. This yields a system of nonlinear equations that is solved to estimate the model parameters.
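The autocorrelation-matching idea can be made concrete with a small sketch. For an MA(q) process x_t = sum_i b_i e_{t-i} driven by unit-variance white noise, the autocovariance at lag k is r(k) = sum_i b_i b_{i+k}; the snippet below solves that nonlinear system numerically. The least-squares solver and the unit-variance assumption are illustrative choices, not the estimation procedure proposed in the article.

```python
# Minimal sketch of autocorrelation matching for an MA(q) process,
# using the relation r(k) = sum_i b_i * b_{i+k} for unit-variance white noise.
import numpy as np
from scipy.optimize import least_squares

def ma_autocovariance(b):
    """Theoretical autocovariances r(0..q) of x_t = sum_i b[i] * e_{t-i}."""
    q = len(b) - 1
    return np.array([np.dot(b[: len(b) - k], b[k:]) for k in range(q + 1)])

def fit_ma_coefficients(r_target, q):
    """Solve the nonlinear system r(k) = sum_i b_i * b_{i+k} for b[0..q]."""
    res = least_squares(lambda b: ma_autocovariance(b) - r_target,
                        x0=np.ones(q + 1))
    return res.x

# Example: recover MA(2) coefficients from their own autocovariances.
b_true = np.array([1.0, 0.6, -0.3])
r = ma_autocovariance(b_true)
b_est = fit_ma_coefficients(r, q=2)
print(b_est)   # matches b_true up to the usual sign/invertibility ambiguity
```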
Brain Sci
December 2024
Research and Development Department, Hashir International Specialist Clinics & Research Institute for Misophonia, Tinnitus and Hyperacusis Ltd., 167-169 Great Portland Street, London W1W 5PF, UK.
The Sound Sensitivity Symptoms Questionnaire version 2 (SSSQ2) is a brief clinical tool with six items designed to be used (1) as a measure for severity of sound sensitivity symptoms in general (based on its total score) and (2) as a checklist to screen different forms of sound sensitivity. The objective of this study was to assess the psychometric properties of the SSSQ2. This was a cross-sectional study.
Audiol Res
January 2025
Otolaryngology Unit, Department of Translational Medicine and Neuroscience-DiBrain, University of Bari, 70124 Bari, Italy.
Aim: The aim of this study was to assess the subjective experiences of adults with different cochlear implant (CI) configurations-unilateral cochlear implant (UCI), bilateral cochlear implant (BCI), and bimodal stimulation (BM)-focusing on their perception of speech in quiet and noisy environments, music, environmental sounds, people's voices and tinnitus.
Methods: A cross-sectional survey of 130 adults who had undergone UCI, BCI, or BM was conducted. Participants completed a six-item online questionnaire, assessing difficulty levels and psychological impact across auditory domains, with responses measured on a 10-point scale.
J Neuroeng Rehabil
January 2025
Dept. of Cognitive Robotics, TU Delft, Delft, Netherlands.
Background: Head-mounted displays can be used to offer personalized immersive virtual reality (IVR) training for patients who have suffered an Acquired Brain Injury (ABI) by tailoring the complexity of visual and auditory stimuli to the patient's cognitive capabilities. However, it is still an open question how these virtual environments should be designed.
Methods: We used a human-centered design approach to help define the characteristics of suitable virtual training environments for ABI patients.