Objective: In a cochlear implant (CI) speech processor, noise reduction (NR) is critical for enabling CI users to attain improved speech perception in noisy conditions. Identifying an effective NR approach has long been a key topic in CI research.
Method: Recently, a deep denoising autoencoder (DDAE)-based NR approach was proposed and shown to be effective in restoring clean speech from noisy observations. DDAE was also shown to outperform several existing NR methods in standardized objective evaluations. Following this success with normal speech, this paper further investigated whether DDAE-based NR can improve the intelligibility of envelope-based vocoded speech, which simulates the speech signal processing in existing CI devices.
Results: We compared the speech intelligibility attained with DDAE-based NR and with conventional single-microphone NR approaches using a noise-vocoder simulation. Both objective evaluations and listening tests showed that, under nonstationary noise, DDAE-based NR yielded higher intelligibility scores than the conventional NR approaches.
Conclusion and Significance: This study confirmed that DDAE-based NR could potentially be integrated into a CI processor to provide more benefits to CI users under noisy conditions.
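The core idea of DDAE-based NR — learning a mapping from noisy feature frames back to their clean counterparts — can be sketched in a few lines. The sketch below is illustrative only, not the paper's system: it uses a single tanh hidden layer in place of the deep stacked layers, synthetic sinusoidal frames in place of log-power spectral features, and plain full-batch gradient descent; all function names and hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_frames(n=512, d=16):
    # Synthetic stand-in for spectral frames: clean sinusoidal segments
    # plus additive Gaussian noise (assumed data, not the paper's corpus).
    clean = np.sin(np.linspace(0, 8 * np.pi, n * d)).reshape(n, d)
    noisy = clean + 0.3 * rng.standard_normal((n, d))
    return noisy, clean

def train_dae(noisy, clean, hidden=32, lr=0.2, epochs=500):
    # One-hidden-layer denoising autoencoder trained with MSE;
    # a real DDAE would stack several such layers.
    d = noisy.shape[1]
    W1 = 0.1 * rng.standard_normal((d, hidden)); b1 = np.zeros(hidden)
    W2 = 0.1 * rng.standard_normal((hidden, d)); b2 = np.zeros(d)
    for _ in range(epochs):
        h = np.tanh(noisy @ W1 + b1)            # encoder
        out = h @ W2 + b2                       # linear decoder
        err = out - clean                       # gradient of MSE loss
        gW2 = h.T @ err / len(noisy); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)        # backprop through tanh
        gW1 = noisy.T @ dh / len(noisy); gb1 = dh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return W1, b1, W2, b2

def denoise(x, params):
    W1, b1, W2, b2 = params
    return np.tanh(x @ W1 + b1) @ W2 + b2

noisy, clean = make_frames()
params = train_dae(noisy, clean)
before = np.mean((noisy - clean) ** 2)   # MSE of the noisy input
after = np.mean((denoise(noisy, params) - clean) ** 2)
```

After training, `after` falls below `before`: the learned mapping suppresses noise that the clean frames' low-dimensional structure does not explain. In a CI pipeline, the denoised features would then drive the envelope-based vocoder stage.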
DOI: http://dx.doi.org/10.1109/TBME.2016.2613960
Brief Bioinform
November 2024
School of Electrical Engineering and Automation, Hefei University of Technology, Hefei, Anhui, China.
Despite significant advancements in single-cell representation learning, scalability and the handling of sparsity and dropout events continue to challenge the field as scRNA-seq datasets expand. Current computational tools struggle to maintain both efficiency and accuracy, and accurately connecting dropout events to specific biological functions usually requires additional, complex experiments, often hampered by inaccuracies in cell-type annotation. To tackle these challenges, the Zero-Inflated Graph Attention Collaborative Learning (ZIGACL) method has been developed.
Phys Med Biol
January 2025
The Division of Imaging Sciences and Biomedical Engineering, King's College London, 5th Floor Becket House, London SE1 7EH, United Kingdom.
Multiplexed positron emission tomography (mPET) imaging allows simultaneous observation of physiological and pathological information from multiple tracers in a single PET scan. Although supervised deep learning has demonstrated superior performance in mPET image separation compared to purely model-based methods, acquiring large amounts of paired single-tracer and multi-tracer data for training poses a practical challenge and requires extended scan durations for patients. In addition, the generalisation ability of the supervised learning framework is a concern, as the patient being scanned and their tracer kinetics may fall outside the training distribution.
Sensors (Basel)
December 2024
CeMOS Research and Transfer Center, Mannheim University of Applied Sciences, 68163 Mannheim, Germany.
Advancements in Raman light sheet microscopy have provided a powerful, non-invasive, marker-free method for imaging complex 3D biological structures, such as cell cultures and spheroids. By combining 3D tomograms made by Rayleigh scattering, Raman scattering, and fluorescence detection, this modality captures complementary spatial and molecular data, critical for biomedical research, histology, and drug discovery. Despite its capabilities, Raman light sheet microscopy faces inherent limitations, including low signal intensity, high noise levels, and restricted spatial resolution, which impede the visualization of fine subcellular structures.
Sensors (Basel)
December 2024
Department of Electrical Engineering, Center for Innovative Research on Aging Society (CIRAS), Advanced Institute of Manufacturing with High-Tech Innovations (AIM-HI), National Chung Cheng University, Chia-Yi 621, Taiwan.
In computer vision, accurately estimating a 3D human skeleton from a single RGB image remains a challenging task. Inspired by the advantages of multi-view approaches, we propose a method for predicting enhanced 2D skeletons (specifically, the joints' relative depths) from multiple virtual viewpoints based on a single real-view image. By fusing these virtual-viewpoint skeletons, we can then estimate the final 3D human skeleton more accurately.
Bioengineering (Basel)
December 2024
Department of Medical Biophysics, University of Toronto, Toronto, ON M4N 3M5, Canada.
Most existing methods for magnetic resonance imaging (MRI) reconstruction with deep learning use fully supervised training, which assumes that a fully sampled dataset with a high signal-to-noise ratio (SNR) is available for training. In many circumstances, however, such a dataset is highly impractical or even technically infeasible to acquire. Recently, a number of self-supervised methods for MRI reconstruction have been proposed, which use sub-sampled data only.