AlphaFlowTSE is introduced as a one-step conditional generative model for target speaker extraction (TSE) that leverages a Jacobian-vector product (JVP)-free AlphaFlow objective. The model learns a direct mixture-to-target trajectory, avoiding the need for mixture-dependent time coordinates and auxiliary mixing-ratio prediction, which are common in other one-step methods. Experiments on Libri2Mix and REAL-T demonstrate that AlphaFlowTSE improves target-speaker similarity and generalization to real mixtures, leading to better ASR performance.
Ditch slow, multi-step sampling for target speaker extraction: AlphaFlowTSE achieves faster, one-step generation with improved speaker similarity and real-world generalization.
In target speaker extraction (TSE), we aim to recover target speech from a multi-talker mixture using a short enrollment utterance as reference. Recent studies on diffusion and flow-matching generators have improved target-speech fidelity. However, multi-step sampling increases latency, and one-step solutions often rely on a mixture-dependent time coordinate that can be unreliable for real-world conversations. We present AlphaFlowTSE, a one-step conditional generative model trained with a Jacobian-vector product (JVP)-free AlphaFlow objective. AlphaFlowTSE learns mean-velocity transport along a mixture-to-target trajectory starting from the observed mixture, eliminating auxiliary mixing-ratio prediction, and stabilizes training by combining flow matching with an interval-consistency teacher-student target. Experiments on Libri2Mix and REAL-T confirm that AlphaFlowTSE improves target-speaker similarity and real-mixture generalization for downstream automatic speech recognition (ASR).
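To make the mixture-to-target trajectory concrete, here is a minimal NumPy sketch of the flow-matching idea the abstract describes: the path starts at the observed mixture rather than at noise, so the straight-line velocity target is available in closed form and no Jacobian-vector products are required. All names, shapes, and the dummy predictor are illustrative assumptions, not the authors' implementation (enrollment conditioning and the interval-consistency teacher-student term are omitted).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy batch: 4 "signals" of length 16 (hypothetical sizes for illustration).
mixture = rng.standard_normal((4, 16))  # observed mixture y (trajectory start)
target = rng.standard_normal((4, 16))   # target speech x (trajectory end)

def interpolate(mixture, target, t):
    """Point on the mixture-to-target path: z_t = (1 - t) * y + t * x."""
    t = t.reshape(-1, 1)
    return (1.0 - t) * mixture + t * target

def flow_matching_loss(predict_velocity, mixture, target, t):
    """MSE between the predicted velocity and the straight-line velocity x - y.

    Because the trajectory starts at the mixture itself, the regression
    target is closed-form and training needs no JVP evaluations.
    """
    z_t = interpolate(mixture, target, t)
    v_target = target - mixture
    v_pred = predict_velocity(z_t, t)
    return float(np.mean((v_pred - v_target) ** 2))

# Dummy stand-in for the conditional network (conditioning omitted).
predict = lambda z, t: np.zeros_like(z)

t = rng.uniform(size=4)
loss = flow_matching_loss(predict, mixture, target, t)
```

At inference, one-step generation under this parameterization amounts to a single Euler step from the mixture along the predicted velocity, which is what removes the multi-step sampling latency the abstract contrasts against.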