This paper introduces a low-latency deep learning model for real-time vocal denoising, targeting live applications where latency is critical. The model employs a band-grouped encoder-decoder architecture with frequency attention and a sigmoid-driven ideal ratio mask trained with a spectral loss. Results show PESQ-WB improvements of 0.21 and 0.12 on stationary and non-stationary noise, respectively, with a total latency under 10 ms.
Real-time vocal denoising is now possible with deep learning, achieving significant SNR improvements at under 10 ms latency.
Real-time, deep learning-based vocal denoising has seen significant progress over the past few years, demonstrating that neural models can preserve the naturalness of the voice while increasing the signal-to-noise ratio (SNR). However, many deep learning approaches incur high latency and require long frames of context, making them difficult to deploy in live applications. To address these challenges, we propose a sigmoid-driven ideal ratio mask trained with a spectral loss that encourages increased SNR while maximizing the perceptual quality of the voice. The proposed model uses a band-grouped encoder-decoder architecture with frequency attention and achieves a total latency of less than 10 ms, with PESQ-WB improvements of 0.21 on stationary noise and 0.12 on non-stationary noise.
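To make the masking approach concrete, here is a minimal sketch assuming PyTorch; the `MaskNet` stand-in, layer sizes, and frame parameters are illustrative assumptions, not the paper's band-grouped architecture. A network produces per-bin logits, a sigmoid squashes them into a ratio mask in [0, 1], the mask scales the noisy magnitude spectrogram, and a magnitude-MSE spectral loss compares the estimate against the clean target.

```python
# Sketch of a sigmoid-driven ideal ratio mask with a magnitude-spectral loss.
# MaskNet, layer sizes, and STFT settings are illustrative, not the paper's.
import torch
import torch.nn as nn

N_FFT, HOP = 256, 128                     # short frames keep algorithmic latency low
window = torch.hann_window(N_FFT)

class MaskNet(nn.Module):
    """Tiny stand-in for the paper's band-grouped encoder-decoder:
    maps each noisy magnitude frame to per-bin mask logits."""
    def __init__(self, bins=N_FFT // 2 + 1):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(bins, 256), nn.ReLU(),
                                 nn.Linear(256, bins))

    def forward(self, mag):               # mag: (batch, frames, bins)
        return self.net(mag)              # logits; squashed by sigmoid later

def denoise_loss(model, noisy, clean):
    """Sigmoid mask applied to the noisy STFT magnitude; spectral (MSE) loss."""
    N = torch.stft(noisy, N_FFT, HOP, window=window, return_complex=True)
    C = torch.stft(clean, N_FFT, HOP, window=window, return_complex=True)
    noisy_mag = N.abs().transpose(1, 2)            # (batch, frames, bins)
    mask = torch.sigmoid(model(noisy_mag))         # ratio-mask estimate in [0, 1]
    est_mag = mask * noisy_mag                     # masked magnitude spectrogram
    return torch.mean((est_mag - C.abs().transpose(1, 2)) ** 2)

# Usage: one training step on dummy 16 kHz audio.
model = MaskNet()
noisy = torch.randn(2, 16000)             # batch of two 1-second clips
clean = torch.randn(2, 16000)
loss = denoise_loss(model, noisy, clean)
loss.backward()
```

With a 256-sample FFT and 128-sample hop at 16 kHz, each frame spans 16 ms; a model of this shape operates frame-by-frame with no lookahead, which is the property that lets the overall system stay within a sub-10 ms latency budget when smaller frames are used.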