This paper introduces a unified encoder architecture for Targeted Sound Detection (TSD) that processes both reference and mixture audio within a shared representation space. By using a shared encoder, the model achieves stronger alignment between reference and mixture audio, simplifying the architecture and improving generalization. The proposed method achieves state-of-the-art performance on the URBAN-SED dataset, with a segment-level F1 score of 83.15% and an overall accuracy of 95.17%.
A shared encoder for targeted sound detection surpasses prior approaches, setting a new state-of-the-art segment-level F1 score of 83.15% on URBAN-SED while simplifying the model architecture.
Human listeners exhibit a remarkable ability to segregate a desired sound from complex acoustic scenes through selective auditory attention, motivating the study of Targeted Sound Detection (TSD). The task requires detecting and localizing a target sound in a mixture when a reference audio of that sound is provided. Prior approaches rely on generating a sound-discriminative conditional embedding vector from the reference audio and pairing it with a separate mixture encoder, jointly optimized under a multi-task learning framework. In this work, we propose a unified encoder architecture that processes both the reference and mixture audio within a shared representation space, promoting stronger alignment while reducing architectural complexity. This design not only simplifies the overall framework but also enhances generalization to unseen classes. Following the multi-task training paradigm, our method achieves substantial improvements over prior approaches, establishing a new state of the art for targeted sound detection with a segment-level F1 score of 83.15% and an overall accuracy of 95.17% on the URBAN-SED dataset.
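To make the shared-encoder idea concrete, below is a minimal PyTorch sketch: a single encoder embeds both the reference clip and the mixture, a time-pooled reference vector conditions a frame-level detection head, and an auxiliary classifier supplies the second multi-task objective. All layer sizes, names, and the concatenation-based conditioning are illustrative assumptions; the paper's exact architecture and training details are not specified here.

```python
# Hypothetical sketch of a shared-encoder TSD model; layer choices and the
# fusion scheme are illustrative assumptions, not the paper's exact design.
import torch
import torch.nn as nn


class SharedEncoderTSD(nn.Module):
    """Reference and mixture both pass through ONE encoder, so their
    embeddings live in the same representation space."""

    def __init__(self, n_mels: int = 64, emb_dim: int = 128, n_classes: int = 10):
        super().__init__()
        # Shared encoder: applied identically to reference and mixture spectrograms.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d((2, 1)),  # pool frequency, keep time resolution
            nn.Conv2d(32, emb_dim, 3, padding=1), nn.BatchNorm2d(emb_dim), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)),  # collapse frequency -> (B, emb, 1, T)
        )
        # Frame-level detection head, conditioned on the reference embedding.
        self.detector = nn.Sequential(
            nn.Linear(2 * emb_dim, emb_dim), nn.ReLU(), nn.Linear(emb_dim, 1)
        )
        # Auxiliary classification head for the multi-task objective.
        self.classifier = nn.Linear(emb_dim, n_classes)

    def embed(self, spec: torch.Tensor) -> torch.Tensor:
        # spec: (B, 1, n_mels, T) -> (B, T, emb_dim)
        return self.encoder(spec).squeeze(2).transpose(1, 2)

    def forward(self, mixture: torch.Tensor, reference: torch.Tensor):
        mix_frames = self.embed(mixture)             # (B, T, emb)
        ref_vec = self.embed(reference).mean(dim=1)  # (B, emb), time-pooled
        # Broadcast the reference vector onto every mixture frame.
        cond = ref_vec.unsqueeze(1).expand(-1, mix_frames.size(1), -1)
        frame_logits = self.detector(torch.cat([mix_frames, cond], dim=-1)).squeeze(-1)
        class_logits = self.classifier(ref_vec)      # auxiliary task
        return frame_logits, class_logits            # (B, T), (B, n_classes)


# Usage: frame-level presence probabilities for the target sound.
model = SharedEncoderTSD()
mixture = torch.randn(4, 1, 64, 250)    # 4 mel-spectrogram mixtures
reference = torch.randn(4, 1, 64, 100)  # paired reference clips
frame_logits, class_logits = model(mixture, reference)
probs = frame_logits.sigmoid()          # per-frame target-presence probability
```

Because the same weights embed both inputs, reference and mixture frames are directly comparable, which is the alignment benefit the paper attributes to the unified encoder; a dual-encoder baseline would instead need to learn a mapping between two separate spaces.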