The paper introduces PAST, a new end-to-end framework for speech tokenization that jointly models phonetic information and signal reconstruction without relying on external pre-trained models. PAST leverages supervised phonetic data through auxiliary tasks to integrate phonetic domain knowledge directly into the tokenization process. The framework, including a streamable variant, outperforms existing baseline tokenizers in phonetic representation, in speech reconstruction, and as a speech representation for speech language models.
Ditch the pre-trained models: PAST directly learns speech tokens from phonetic data, outperforming existing methods in representation and reconstruction.
We present PAST, a novel end-to-end framework that jointly models phonetic information alongside signal reconstruction, eliminating the need for external pre-trained models. Unlike previous approaches that rely on pre-trained self-supervised models, PAST employs supervised phonetic data, directly integrating domain knowledge into the tokenization process via auxiliary tasks. Additionally, we introduce a streamable, causal variant of PAST, enabling real-time speech applications. Results demonstrate that PAST surpasses all evaluated baseline tokenizers across common evaluation metrics, including phonetic representation and speech reconstruction. Notably, PAST also achieves superior performance when serving as a speech representation for speech language models, further highlighting its effectiveness as a foundation for spoken language generation. To foster further research, we release the full implementation. For code, model checkpoints, and samples, see: https://pages.cs.huji.ac.il/adiyoss-lab/PAST
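To make the idea of "integrating phonetic knowledge via auxiliary tasks" concrete, the sketch below shows a toy joint objective: a vector-quantized autoencoder whose latents feed both a reconstruction decoder and a supervised frame-level phoneme classifier. This is a minimal illustration only; the architecture, dimensions, and loss weighting here are invented for exposition and do not reproduce PAST's actual model (which uses a full neural codec with residual quantization and additional auxiliary heads).

```python
import numpy as np

# Toy sketch of a jointly trained tokenizer objective (all names/dims invented).
rng = np.random.default_rng(0)
T, D, K, P = 50, 16, 32, 40    # frames, latent dim, codebook size, phoneme classes

W_enc = rng.normal(size=(D, D)) * 0.1    # stand-in encoder (a conv stack in practice)
W_dec = rng.normal(size=(D, D)) * 0.1    # stand-in decoder
codebook = rng.normal(size=(K, D))       # single VQ codebook
W_phone = rng.normal(size=(D, P)) * 0.1  # auxiliary phoneme-classification head

x = rng.normal(size=(T, D))              # "audio" feature frames
phones = rng.integers(0, P, size=T)      # supervised frame-level phoneme labels

z = x @ W_enc                            # encode
# Vector quantization: snap each frame to its nearest codebook entry,
# yielding discrete speech tokens.
dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
codes = dists.argmin(axis=1)             # token IDs, shape (T,)
z_q = codebook[codes]

x_hat = z_q @ W_dec                      # reconstruct the signal from tokens
recon_loss = ((x - x_hat) ** 2).mean()

# Auxiliary task: predict phonemes from the latents, so gradients from
# supervised phonetic data shape the tokenizer's representation.
logits = z @ W_phone
logits = logits - logits.max(axis=-1, keepdims=True)      # numerical stability
logp = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
phone_loss = -logp[np.arange(T), phones].mean()

total_loss = recon_loss + 1.0 * phone_loss   # weighted joint objective
```

In training, both losses would backpropagate into the shared encoder, which is what lets the discrete tokens carry phonetic structure in addition to being decodable back to audio.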