This paper introduces a training-free multi-step inference method for target speaker extraction (TSE) using a frozen pretrained model. The method iteratively refines the extracted speech by generating new candidates through interpolation of the original mixture and the previous estimate, then selecting the best candidate at each step. Experiments show that optimizing an intrusive metric (SI-SDRi) against ground-truth target speech yields consistent gains across metrics; when ground truth is unavailable, jointly optimizing non-intrusive metrics balances competing extraction preferences.
Iteratively refining target speaker extraction *without* retraining a model unlocks significant performance gains, offering a flexible and efficient approach to speech separation.
Target speaker extraction (TSE) aims to recover a target speaker's speech from a mixture using a reference utterance as a cue. Most TSE systems adopt conditional auto-encoder architectures with one-step inference. Inspired by test-time scaling, we propose a training-free multi-step inference method that enables iterative refinement with a frozen pretrained model. At each step, new candidates are generated by interpolating the original mixture and the previous estimate, and the best candidate is selected for further refinement until convergence. Experiments show that, when ground-truth target speech is available, optimizing an intrusive metric (SI-SDRi) yields consistent gains across multiple evaluation metrics. Without ground truth, optimizing non-intrusive metrics (UTMOS or SpkSim) improves the corresponding metric but may hurt others. We therefore introduce joint metric optimization to balance these objectives, enabling controllable extraction preferences for practical deployment.
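The iterative refinement described in the abstract can be sketched as a simple search loop. In the sketch below, `extract(signal, reference)` stands in for the frozen pretrained TSE model and `score(estimate)` for the chosen metric (higher is better); both names, the interpolation weights `alphas`, and the convergence tolerance are illustrative assumptions, not the paper's actual interface.

```python
def multi_step_tse(mixture, reference, extract, score,
                   alphas=(0.25, 0.5, 0.75, 1.0), max_steps=10, tol=1e-6):
    """Sketch of training-free multi-step TSE inference.

    `extract(signal, reference)` is a placeholder for a frozen pretrained
    TSE model; `score(estimate)` is a placeholder for the selection metric
    (e.g., SI-SDRi when ground truth is available, or a non-intrusive
    metric otherwise). Both are hypothetical stand-ins.
    """
    estimate = extract(mixture, reference)
    best = score(estimate)
    for _ in range(max_steps):
        # Generate candidates: interpolate the original mixture with the
        # previous estimate, then re-run the frozen extractor on each blend.
        candidates = [
            extract([a * m + (1 - a) * e for m, e in zip(mixture, estimate)],
                    reference)
            for a in alphas
        ]
        # Select the best candidate under the chosen metric.
        cand_score, cand = max(((score(c), c) for c in candidates),
                               key=lambda t: t[0])
        if cand_score <= best + tol:
            break  # converged: no candidate improves the metric
        best, estimate = cand_score, cand
    return estimate
```

Because only forward passes of the frozen model and metric evaluations are needed, the loop adds inference-time compute without any retraining; the metric passed as `score` is what determines the extraction preference.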