TurboTalk addresses the computational bottleneck of multi-step denoising in audio-driven talking avatar generation by distilling a diffusion model into a single-step generator. The authors use a two-stage progressive distillation framework, first employing Distribution Matching Distillation to create a stable 4-step student, then reducing the step count to one via adversarial distillation. The key innovations are a progressive timestep sampling strategy and a self-compare adversarial objective that stabilize training during extreme step reduction, achieving a 120x speedup while maintaining quality.
Generate realistic talking avatars 120x faster by distilling multi-step diffusion models into a single-step generator, without sacrificing quality.
Existing audio-driven video digital human generation models rely on multi-step denoising, resulting in substantial computational overhead that severely limits their deployment in real-world settings. While one-step distillation approaches can significantly accelerate inference, they often suffer from training instability. To address this challenge, we propose TurboTalk, a two-stage progressive distillation framework that effectively compresses a multi-step audio-driven video diffusion model into a single-step generator. We first adopt Distribution Matching Distillation to obtain a strong and stable 4-step student, and then progressively reduce the denoising steps from 4 to 1 through adversarial distillation. To ensure stable training under extreme step reduction, we introduce a progressive timestep sampling strategy and a self-compare adversarial objective that provides an intermediate adversarial reference for the progressive distillation. Our method achieves single-step generation of talking avatar videos, boosting inference speed by 120 times while maintaining high generation quality.
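To make the step-reduction idea concrete, here is a minimal sketch of what a progressive timestep schedule could look like: the student's denoising steps are halved stage by stage (4, then 2, then 1), each stage keeping an evenly spaced subset of the previous grid. This is an illustrative assumption based only on the abstract; the function names, the halving schedule, and the subsampling rule are not taken from the paper's actual code.

```python
# Hypothetical illustration of progressive timestep sampling for
# step-reduction distillation (4 -> 2 -> 1 steps). Not the paper's code.

def subsample_timesteps(timesteps, num_steps):
    """Keep `num_steps` evenly spaced entries from a descending timestep grid."""
    if num_steps == 1:
        # The single-step student denoises directly from the noisiest timestep.
        return [timesteps[0]]
    n = len(timesteps)
    idx = [round(i * (n - 1) / (num_steps - 1)) for i in range(num_steps)]
    return [timesteps[i] for i in idx]

def progressive_schedule(teacher_timesteps, start_steps=4):
    """Return one timestep grid per distillation stage, halving the step count."""
    grid = subsample_timesteps(teacher_timesteps, start_steps)
    stages, steps = [], start_steps
    while steps >= 1:
        stages.append(subsample_timesteps(grid, steps))
        steps //= 2
    return stages

# Example: an 8-point teacher grid (illustrative values on a 0-999 scale).
teacher = [999, 856, 713, 570, 427, 284, 141, 0]
for stage in progressive_schedule(teacher):
    print(stage)
# -> [999, 713, 284, 0]
# -> [999, 0]
# -> [999]
```

Each stage's student is trained to match the previous, larger-step model on its coarser grid, so the transition to a single step happens gradually rather than in one jump.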