Objective: We investigate the clinical effectiveness of a novel deep learning-based noise reduction (NR) approach for Mandarin-speaking cochlear implant (CI) recipients under challenging noise types at low signal-to-noise ratio (SNR) levels.

Design: The deep learning-based NR approach used in this study consists of two modules, a noise classifier (NC) and a deep denoising autoencoder (DDAE), and is hence termed NC + DDAE. In a series of experiments, we conduct qualitative and quantitative analyses of the NC module and of the overall NC + DDAE approach. Moreover, we evaluate the speech recognition performance of the NC + DDAE NR approach and of classical single-microphone NR approaches for Mandarin-speaking CI recipients under different noisy conditions. The testing set contains Mandarin sentences corrupted by two types of maskers, two-talker babble noise and construction jackhammer noise, at 0 and 5 dB SNR. Two conventional NR techniques and the proposed deep learning-based approach are used to process the noisy utterances. We qualitatively compare the NR approaches using amplitude-envelope and spectrogram plots of the processed utterances. Quantitative evaluation includes (1) the normalized covariance measure, an objective index of the intelligibility of the utterances processed by each NR approach, and (2) speech recognition tests conducted with nine Mandarin-speaking CI recipients, each using their own clinical speech processor during testing.
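The NC + DDAE design described above can be sketched in a few lines of numpy: train one noise-specific denoising autoencoder per noise type, let a simple classifier identify the noise in the incoming utterance, and route the utterance to the matching denoiser. Everything concrete here (synthetic low-rank "spectra", network size, the nearest-profile classifier, training schedule) is an illustrative assumption, not the study's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K, H = 16, 4, 32            # spectral bins, clean-signal subspace rank, hidden units
BASIS = rng.standard_normal((K, D)) / np.sqrt(K)

def make_batch(n, noise_kind):
    """Synthetic log-spectra: low-rank 'clean' frames plus one of two noise types."""
    clean = rng.standard_normal((n, K)) @ BASIS
    if noise_kind == "babble":                 # broadband, moderate level
        noise = 0.5 * rng.standard_normal((n, D))
    else:                                      # "jackhammer": low-frequency bursts
        noise = np.zeros((n, D))
        noise[:, : D // 4] = rng.standard_normal((n, D // 4))
    return clean + noise, clean

class DDAE:
    """One-hidden-layer denoising autoencoder trained on (noisy, clean) pairs."""
    def __init__(self, lr=0.05):
        self.W1 = 0.1 * rng.standard_normal((D, H)); self.b1 = np.zeros(H)
        self.W2 = 0.1 * rng.standard_normal((H, D)); self.b2 = np.zeros(D)
        self.lr = lr

    def forward(self, x):
        self.hid = np.tanh(x @ self.W1 + self.b1)
        return self.hid @ self.W2 + self.b2

    def train_step(self, noisy, clean):
        y = self.forward(noisy)
        err = (y - clean) / len(noisy)         # gradient of 0.5 * mean squared error
        gW2, gb2 = self.hid.T @ err, err.sum(0)
        dh = (err @ self.W2.T) * (1.0 - self.hid ** 2)
        gW1, gb1 = noisy.T @ dh, dh.sum(0)
        self.W1 -= self.lr * gW1; self.b1 -= self.lr * gb1
        self.W2 -= self.lr * gW2; self.b2 -= self.lr * gb2

# Train one noise-specific DDAE per noise type, plus a per-bin power template
# the NC module will match against.
models, templates = {}, {}
for noise_kind in ("babble", "jackhammer"):
    model = DDAE()
    for _ in range(2000):
        noisy, clean = make_batch(128, noise_kind)
        model.train_step(noisy, clean)
    models[noise_kind] = model
    templates[noise_kind] = make_batch(2000, noise_kind)[0].var(axis=0)

def classify_noise(noisy):
    """NC module: pick the stored per-bin power profile nearest the utterance's."""
    profile = noisy.var(axis=0)
    return min(templates, key=lambda k: np.sum((profile - templates[k]) ** 2))

# Held-out "utterance": classify its noise, then denoise with the matching DDAE.
noisy, clean = make_batch(256, "jackhammer")
kind = classify_noise(noisy)
denoised = models[kind].forward(noisy)
noisy_mse = float(((noisy - clean) ** 2).mean())
denoised_mse = float(((denoised - clean) ** 2).mean())
```

The routing step is the essence of the NC + DDAE idea: a denoiser trained for the detected noise type should outperform a single generic denoiser, which is what the matched/mismatched comparison in the study probes.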

Results: The objective evaluations and listening tests indicate that, under challenging listening conditions, the proposed NC + DDAE NR approach yields higher intelligibility scores than the two classical NR techniques, under both matched and mismatched training-testing conditions.
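The normalized covariance measure behind the objective intelligibility scores can be sketched as follows: band-filter the clean and processed signals, extract each band's Hilbert envelope, compute the correlation between clean and processed envelopes, convert it to an apparent SNR, clip, and average. This is a simplified, equal-weight variant; the study's exact filterbank, band range (300-3400 Hz here), and band weights are assumptions.

```python
import numpy as np

def analytic_envelope(x):
    """Hilbert-transform envelope via FFT (numpy-only)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(X * h))

def ncm(clean, processed, fs=16000, n_bands=10):
    """Simplified normalized covariance measure with equal band weights."""
    n = len(clean)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    edges = np.linspace(300.0, 3400.0, n_bands + 1)
    snrs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)

        def band_env(x):
            # crude band-pass by zeroing out-of-band FFT bins
            return analytic_envelope(np.fft.irfft(np.fft.rfft(x) * mask, n))

        r = np.corrcoef(band_env(clean), band_env(processed))[0, 1]
        r2 = min(max(r, 0.0) ** 2, 1.0 - 1e-6)
        snr = 10.0 * np.log10(r2 / (1.0 - r2)) if r2 > 0 else -15.0
        snrs.append(np.clip(snr, -15.0, 15.0))   # clip apparent SNR to [-15, 15] dB
    return float(np.mean((np.array(snrs) + 15.0) / 30.0))  # map to [0, 1]

# Toy check: a clean broadband signal scores ~1 against itself, and the
# score drops once broadband noise is mixed in at roughly 0 dB SNR.
rng = np.random.default_rng(1)
clean = rng.standard_normal(8000)
noisy = clean + rng.standard_normal(8000)
clean_score = ncm(clean, clean)
noisy_score = ncm(clean, noisy)
```

An NR approach that suppresses noise without smearing the band envelopes keeps the covariance high, which is why the measure tracks the envelope-distortion argument made in the conclusions.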

Conclusions: Compared with the two well-known conventional NR techniques under challenging listening conditions, the proposed NC + DDAE NR approach provides superior noise suppression and introduces less distortion to the key speech-envelope information, thereby improving speech recognition more effectively for Mandarin-speaking CI recipients. These results suggest that the proposed deep learning-based NR approach could be integrated into existing CI speech processors to overcome the degradation of speech perception caused by noise.

Source: http://dx.doi.org/10.1097/AUD.0000000000000537
