The paper introduces Whisper-RIR-Mega, a benchmark dataset pairing clean LibriSpeech utterances with reverberant versions generated using real room impulse responses from the RIR-Mega corpus. The dataset enables controlled evaluation of ASR robustness to room acoustics, with splits stratified by reverberation time (RT60) and direct-to-reverberant ratio (DRR). Experiments with Whisper models (tiny to large-v3) show that reverberation consistently degrades ASR performance, with WER increases ranging from 0.12 to 1.07 percentage points.
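The core construction — a reverberant version of each utterance obtained by convolving the clean waveform with a measured room impulse response — can be sketched as below. This is a minimal illustration, not the paper's released pipeline; the peak renormalization step is an assumption made here so that loudness changes don't confound the comparison.

```python
import numpy as np

def add_reverb(clean: np.ndarray, rir: np.ndarray) -> np.ndarray:
    """Convolve a clean utterance with a room impulse response (RIR),
    trim back to the original length, and renormalize the peak level.

    Peak matching is an assumed normalization, not from the paper: it
    keeps WER differences attributable to reverberation, not loudness.
    """
    wet = np.convolve(clean, rir)[: len(clean)]
    peak = np.max(np.abs(wet))
    if peak > 0:
        wet = wet * (np.max(np.abs(clean)) / peak)
    return wet

# Toy stand-ins: an impulse-train "utterance" and an exponentially
# decaying RIR (real RIRs come from the RIR-Mega corpus).
clean = np.zeros(16000)
clean[::4000] = 1.0
rir = np.exp(-np.arange(2000) / 300.0)
reverberant = add_reverb(clean, rir)
```

The trim keeps the clean and reverberant files sample-aligned, which is what makes a paired, controlled evaluation possible.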
Reverberation consistently degrades Whisper ASR performance, even for the largest models, underscoring the need for continued research on robust speech recognition.
We introduce Whisper-RIR-Mega, a benchmark dataset of paired clean and reverberant speech for evaluating automatic speech recognition (ASR) robustness to room acoustics. Each sample pairs a clean LibriSpeech utterance with the same utterance convolved with a real room impulse response from the RIR-Mega corpus, with stratified splits by reverberation time (RT60) and direct-to-reverberant ratio (DRR). We evaluate five Whisper models (tiny through large-v3) on 1600 test samples and report word error rate (WER) and character error rate (CER) under clean and reverberant conditions. Reverberation consistently degrades performance across all model sizes; the reverb penalty in WER ranges from 0.12 to 1.07 percentage points depending on the model. We release the dataset, evaluation code, and baseline results to support reproducible research on robust ASR.
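The headline metric, word error rate, is the word-level edit distance between a reference transcript and a hypothesis, divided by the reference length; the "reverb penalty" is then the WER gap between reverberant and clean conditions in percentage points. A self-contained sketch (the transcripts below are illustrative, not from the dataset):

```python
def wer(ref: str, hyp: str) -> float:
    """Word error rate: word-level Levenshtein distance divided by
    the number of reference words."""
    r, h = ref.split(), hyp.split()
    # One-row dynamic-programming edit distance over words.
    d = list(range(len(h) + 1))
    for i, rw in enumerate(r, 1):
        prev, d[0] = d[0], i
        for j, hw in enumerate(h, 1):
            cur = min(d[j] + 1,            # deletion (ref word dropped)
                      d[j - 1] + 1,        # insertion (extra hyp word)
                      prev + (rw != hw))   # substitution or match
            prev, d[j] = d[j], cur
    return d[len(h)] / max(len(r), 1)

# Hypothetical clean vs. reverberant outputs for one utterance:
clean_wer = wer("the cat sat on the mat", "the cat sat on the mat")
reverb_wer = wer("the cat sat on the mat", "the cat sat on a mat")
penalty_pp = (reverb_wer - clean_wer) * 100  # reverb penalty, pp
```

CER follows the same recipe at the character level. Averaging these penalties over the 1600 test samples gives per-model figures like the 0.12 to 1.07 percentage-point range reported above.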