The paper introduces TADA, a novel tokenization scheme for speech modeling that achieves one-to-one synchronization between continuous acoustic features and text tokens to enable unified, single-stream modeling within an LLM. By using synchronous tokens and a flow matching head, TADA maintains high-fidelity audio reconstruction and allows for effective latent space modeling by an LLM. The method also introduces a text-only guidance technique to bridge the gap between text-only and text-speech models, leading to competitive TTS and SLM performance with reduced hallucinations and inference costs.
Achieve competitive TTS and SLM performance while slashing inference costs and virtually eliminating content hallucinations by synchronizing text and acoustic tokens.
Modern Text-to-Speech (TTS) systems increasingly leverage Large Language Model (LLM) architectures to achieve scalable, high-fidelity, zero-shot generation. However, these systems typically rely on fixed-frame-rate acoustic tokenization, resulting in speech sequences that are significantly longer than, and asynchronous with, their corresponding text. Beyond computational inefficiency, this sequence length disparity often triggers hallucinations in TTS and amplifies the modality gap in spoken language modeling (SLM). In this paper, we propose a novel tokenization scheme that establishes one-to-one synchronization between continuous acoustic features and text tokens, enabling unified, single-stream modeling within an LLM. We demonstrate that these synchronous tokens maintain high-fidelity audio reconstruction and can be effectively modeled in a latent space by a large language model with a flow matching head. Moreover, the ability to seamlessly toggle speech modality within the context enables text-only guidance, a technique that blends logits from text-only and text-speech modes to flexibly bridge the gap toward text-only LLM intelligence. Experimental results indicate that our approach achieves performance competitive with state-of-the-art TTS and SLM systems while virtually eliminating content hallucinations and preserving linguistic integrity, all at a significantly reduced inference cost.
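The abstract describes a flow matching head that models the continuous acoustic latents. The paper's exact formulation is not given here, so the following is only a minimal sketch of the standard conditional flow-matching training target (straight-line path with constant velocity); the function name and the plain-list representation of latents are illustrative assumptions.

```python
import random

def flow_matching_step(x0, x1, t=None):
    """Build one conditional flow-matching training example.

    x0: a noise sample, x1: a target acoustic latent (plain float lists
    here for illustration). The straight-line path
        x_t = (1 - t) * x0 + t * x1
    has constant velocity x1 - x0, which the head would regress at the
    sampled time t. (Assumed rectified-flow-style objective; the paper's
    conditioning on LLM hidden states is omitted.)
    """
    if t is None:
        t = random.random()  # sample a time uniformly in [0, 1)
    x_t = [(1 - t) * a + t * b for a, b in zip(x0, x1)]
    target_velocity = [b - a for a, b in zip(x0, x1)]
    return t, x_t, target_velocity
```

At inference, such a head would integrate the learned velocity field from noise to a latent, which the tokenizer's decoder then turns back into audio.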