Residual Recurrent Neural Network for Speech Enhancement

Proc IEEE Int Conf Acoust Speech Signal Process

Rutgers, the State University of New Jersey, USA.

Published: May 2020

Most current speech enhancement models use spectrogram features, which require an expensive transformation and result in phase information loss. Previous work has overcome these issues by using convolutional networks to learn temporal correlations directly across high-resolution waveforms. These models, however, are limited by memory-intensive dilated convolutions and aliasing artifacts from upsampling. We introduce an end-to-end, fully recurrent neural network for single-channel speech enhancement. The network is structured as an hourglass shape that can efficiently capture long-range temporal dependencies by reducing the feature resolution without information loss. We also use residual connections to prevent gradient decay over layers and to improve model generalization. Experimental results show that our model outperforms state-of-the-art approaches on six quantitative evaluation metrics.
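The following is a minimal sketch of how an hourglass-shaped recurrent network with residual connections could be assembled, assuming a PyTorch-style implementation. All module names, layer sizes, and the fold-based downsampling factor are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only: hourglass-shaped recurrent network with residual
# connections for waveform-level speech enhancement. Layer sizes, strides,
# and module names are assumptions, not taken from the paper.
import torch
import torch.nn as nn


class ResidualGRUBlock(nn.Module):
    """GRU layer whose output is added back to its input (residual connection)."""

    def __init__(self, channels):
        super().__init__()
        self.gru = nn.GRU(channels, channels, batch_first=True)

    def forward(self, x):              # x: (batch, time, channels)
        out, _ = self.gru(x)
        return x + out                 # residual path helps prevent gradient decay


class HourglassRNN(nn.Module):
    """Encoder reduces temporal resolution, bottleneck models long-range
    dependencies, decoder restores the original resolution."""

    def __init__(self, channels=64, downsample=4):
        super().__init__()
        self.down = downsample
        self.in_proj = nn.Linear(1, channels)
        self.encoder = ResidualGRUBlock(channels)
        self.bottleneck = ResidualGRUBlock(channels * downsample)
        self.decoder = ResidualGRUBlock(channels)
        self.out_proj = nn.Linear(channels, 1)

    def forward(self, wav):            # wav: (batch, time) raw waveform
        b, t = wav.shape               # t is assumed divisible by self.down
        x = self.in_proj(wav.unsqueeze(-1))      # (b, t, c)
        x = self.encoder(x)
        # Fold adjacent time steps into the channel dimension: the sequence
        # gets shorter without discarding any samples.
        x = x.reshape(b, t // self.down, -1)     # (b, t/d, c*d)
        x = self.bottleneck(x)                   # long-range temporal modeling
        x = x.reshape(b, t, -1)                  # restore full resolution
        x = self.decoder(x)
        return self.out_proj(x).squeeze(-1)      # enhanced waveform


if __name__ == "__main__":
    model = HourglassRNN()
    noisy = torch.randn(2, 1024)                 # batch of noisy waveforms
    enhanced = model(noisy)
    print(enhanced.shape)                        # torch.Size([2, 1024])
```

Folding adjacent time steps into the channel dimension is one way to shorten the sequence seen by the bottleneck RNN without discarding samples, in the spirit of "reducing the feature resolution without information loss"; the paper's actual downsampling mechanism may differ.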

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7954533
DOI: http://dx.doi.org/10.1109/icassp40776.2020.9053544

Publication Analysis

Top Keywords (frequency)

speech enhancement: 12
recurrent neural: 8
neural network: 8
residual recurrent: 4
network speech: 4
enhancement current: 4
current speech: 4
enhancement models: 4
models spectrogram: 4
spectrogram features: 4
