Fish Audio S2 is introduced as an open-source text-to-speech (TTS) system with multi-speaker, multi-turn generation capabilities, controlled via natural-language instructions. A multi-stage training recipe and data pipeline were developed, incorporating video and speech captioning, voice-quality assessment, and reward modeling to scale training. The system achieves a real-time factor (RTF) of 0.195 and a time-to-first-audio below 100 ms, with model weights, fine-tuning code, and an SGLang-based inference engine released for production use.
Open-source TTS gets a serious upgrade with Fish Audio S2, offering instruction-following control via natural language and production-ready streaming performance.
We introduce Fish Audio S2, an open-source text-to-speech system featuring multi-speaker, multi-turn generation and, most importantly, instruction-following control via natural-language descriptions. To scale training, we develop a multi-stage training recipe together with a staged data pipeline covering video captioning, speech captioning, voice-quality assessment, and reward modeling. To push the frontier of open-source TTS, we release our model weights, fine-tuning code, and an SGLang-based inference engine. The inference engine is production-ready for streaming, achieving an RTF of 0.195 and a time-to-first-audio below 100 ms. Our code and weights are available on GitHub (https://github.com/fishaudio/fish-speech) and Hugging Face (https://huggingface.co/fishaudio/s2-pro). We highly encourage readers to visit https://fish.audio to try custom voices.
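The real-time factor quoted above relates synthesis time to the duration of audio produced; a value below 1.0 means the system generates speech faster than playback. A minimal sketch of the arithmetic (the function name and the timing numbers other than the stated RTF of 0.195 are illustrative, not from the release):

```python
def real_time_factor(synthesis_seconds: float, audio_seconds: float) -> float:
    """RTF = wall-clock synthesis time / duration of generated audio.

    RTF < 1.0 means faster-than-real-time synthesis, a prerequisite
    for streaming playback without buffering stalls.
    """
    return synthesis_seconds / audio_seconds


# At RTF = 0.195, producing 10 s of audio takes about 1.95 s of compute,
# leaving roughly 80% of each playback interval as headroom.
rtf = real_time_factor(1.95, 10.0)
print(f"RTF = {rtf:.3f}")
```

Combined with a time-to-first-audio under 100 ms, this headroom is what makes the engine suitable for interactive, streaming use rather than batch rendering.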