Uni-ASR is a novel LLM-based architecture that unifies non-streaming and streaming automatic speech recognition (ASR) within a single framework. A joint training paradigm lets the model switch seamlessly between the two recognition modes without any architectural changes, while a context-aware training scheme and a co-designed fallback decoding strategy improve streaming accuracy without increasing latency.
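The summary does not pin down how one model serves both modes, but a common way to realize this kind of mode switching is to vary only the attention mask at training time: full context for non-streaming, chunk-causal for streaming. A minimal PyTorch sketch under that assumption (the function names, mixing probability, and chunk sizes are hypothetical, not from the paper):

```python
import random
from typing import Optional

import torch


def build_attention_mask(seq_len: int, chunk_size: Optional[int]) -> torch.Tensor:
    """Boolean mask: True where a frame is allowed to attend.

    chunk_size=None -> full context (non-streaming mode).
    chunk_size=k    -> chunk-causal mask (streaming mode): each frame sees
                       everything up to the end of its own chunk, nothing beyond.
    """
    if chunk_size is None:
        return torch.ones(seq_len, seq_len, dtype=torch.bool)
    idx = torch.arange(seq_len)
    chunk_end = (idx // chunk_size + 1) * chunk_size  # exclusive bound per frame
    return idx.unsqueeze(0) < chunk_end.unsqueeze(1)


def sample_training_mask(seq_len: int, streaming_prob: float = 0.5,
                         chunk_sizes=(8, 16, 32)) -> torch.Tensor:
    """Joint training: draw a mode per utterance; the model itself never changes."""
    if random.random() < streaming_prob:
        return build_attention_mask(seq_len, random.choice(chunk_sizes))
    return build_attention_mask(seq_len, None)
```

Because the mode lives entirely in the mask, switching at inference is just a matter of which mask is supplied, consistent with the claim of mode switching without architectural changes.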
A single LLM can now handle both non-streaming and streaming ASR, opening the door to more flexible and efficient speech recognition systems.
Although the deep integration of automatic speech recognition (ASR) systems with large language models (LLMs) has significantly improved accuracy, deploying such systems in low-latency streaming scenarios remains challenging. In this paper, we propose Uni-ASR, a unified LLM-based framework that integrates both non-streaming and streaming speech recognition capabilities. We propose a joint training paradigm that enables the system to transition seamlessly between the two recognition modes without any architectural modifications. Furthermore, we introduce a context-aware training paradigm and a co-designed fallback decoding strategy that enhance streaming recognition accuracy without introducing additional latency. Experimental results demonstrate that Uni-ASR not only achieves competitive performance in non-streaming mode, but also remains highly effective in streaming scenarios under diverse latency constraints.
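The abstract does not describe the fallback decoding strategy in detail. One plausible reading is a confidence-gated fallback, where low-confidence streaming output is withheld until more left context arrives, so no extra wait is added on the confident path. A sketch of that idea only (the function, threshold, and gating rule are assumptions, not the paper's method):

```python
import math


def fallback_decode_step(chunk_tokens: list[str], chunk_logprobs: list[float],
                         committed: list[str],
                         threshold: float = math.log(0.6)):
    """One streaming decode step with a simple confidence fallback.

    If every token in the new chunk clears the log-prob threshold, commit it
    immediately (no added latency). Otherwise withhold the chunk; the next
    step re-decodes it with more accumulated left context.
    """
    if all(lp >= threshold for lp in chunk_logprobs):
        committed.extend(chunk_tokens)
        return committed, []          # nothing pending
    return committed, chunk_tokens    # pending tokens await more context
```

Under this scheme, extra work is incurred only when the model is already uncertain, which would be one way to reconcile improved streaming accuracy with no additional latency on the common path.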