The paper introduces ATRIE, a persona-driven speech synthesis framework that disentangles timbre and prosody using a Persona-Prosody Dual-Track (P2-DT) architecture. ATRIE uses Scalar Quantization for a static Timbre Track and Hierarchical Flow-Matching for a dynamic Prosody Track, both distilled from a 14B LLM teacher. Results on an extended AnimeTTS-Bench demonstrate state-of-the-art performance in identity preservation (0.04 EER) and cross-modal retrieval (0.75 mAP), showing that the system generates high-fidelity character voices whose persona traits remain consistent across diverse emotions.
Finally, anime avatars can convincingly express a full range of emotions without losing their unique vocal identity.
High-fidelity character voice synthesis is a cornerstone of immersive multimedia applications, particularly for interactive anime avatars and digital humans. However, existing systems struggle to maintain consistent persona traits across diverse emotional contexts. To bridge this gap, we present ATRIE, a unified framework built on a Persona-Prosody Dual-Track (P2-DT) architecture. Our system disentangles generation into a static Timbre Track (via Scalar Quantization) and a dynamic Prosody Track (via Hierarchical Flow-Matching), both distilled from a 14B LLM teacher. This design enables robust identity preservation (Zero-Shot Speaker Verification EER: 0.04) alongside rich emotional expression. Evaluated on our extended AnimeTTS-Bench (50 characters), ATRIE achieves state-of-the-art performance in both generation and cross-modal retrieval (mAP: 0.75), establishing a new paradigm for persona-driven multimedia content creation.
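To make the two tracks concrete, the following is a minimal, generic sketch of the building blocks the abstract names: scalar quantization (snapping a timbre latent to a fixed grid, giving a static, discrete identity code) and the conditional flow-matching training pair (interpolating noise toward a prosody target and regressing the path's velocity). The function names, grid size, and linear interpolation path are illustrative assumptions, not the paper's implementation; the hierarchical structure and LLM distillation are omitted.

```python
import numpy as np

def scalar_quantize(z: np.ndarray, levels: int = 16) -> np.ndarray:
    """Bound each latent dimension, then snap it to a uniform grid in [-1, 1].

    A static code like this changes only with speaker identity, which is why
    scalar quantization suits a timbre track. (Illustrative; not ATRIE's exact scheme.)
    """
    z = np.tanh(z)                                # bound the latent to (-1, 1)
    zq = np.round((z + 1) / 2 * (levels - 1))     # index on a uniform grid
    return zq / (levels - 1) * 2 - 1              # map the index back to [-1, 1]

def flow_matching_pair(x1: np.ndarray, rng: np.random.Generator):
    """Sample one (x_t, t, target) training triple for conditional flow matching
    along the straight path from Gaussian noise x0 to a data sample x1.

    A model v(x_t, t) trained to regress `target` can then trace dynamic
    trajectories at inference time, as a prosody track requires.
    """
    x0 = rng.standard_normal(x1.shape)            # noise endpoint of the path
    t = rng.random((x1.shape[0], 1))              # random time in [0, 1]
    xt = (1 - t) * x0 + t * x1                    # point on the linear path
    target = x1 - x0                              # constant velocity of that path
    return xt, t, target
```

Note the identity `xt + (1 - t) * target == x1`: integrating the regressed velocity from time t to 1 recovers the data sample, which is what makes the linear path a valid training signal.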