This paper tackles the problem of noisy triplet annotations in Composed Image Retrieval (CIR), focusing on "hard noise": cases where the reference and target images are highly similar but the modification text is incorrect. The authors identify three key challenges, namely Modality Suppression, Negative Anchor Deficiency, and Unlearning Backlash. To address them, they propose ConeSep, a Cone-based Robust Noise-Unlearning Compositional network, which combines Geometric Fidelity Quantization, Negative Boundary Learning, and Boundary-based Targeted Unlearning to mitigate the impact of noisy data.
CIR models struggle with noisy data because "hard noise" breaks the small loss hypothesis, but ConeSep's novel unlearning approach overcomes this to achieve state-of-the-art results.
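The "small loss hypothesis" assumes mislabeled samples incur large training losses and can thus be filtered by a loss threshold. The toy simulation below (with entirely hypothetical loss values, not drawn from the paper) illustrates why hard noise breaks this: triplets whose reference and target images are nearly identical still fit easily, so their losses sit in the same range as clean samples and evade a small-loss filter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-triplet training losses, for illustration only.
clean_losses = rng.normal(0.30, 0.05, 100)  # correct triplets: small loss
easy_noise = rng.normal(1.20, 0.10, 50)     # random mismatches: large loss
hard_noise = rng.normal(0.35, 0.05, 50)     # similar ref/target, wrong text: small loss

threshold = 0.6  # a typical small-loss cutoff

# Small-loss selection flags easy noise but misses almost all hard noise.
easy_flagged = np.mean(easy_noise > threshold)
hard_flagged = np.mean(hard_noise > threshold)
print(f"easy noise flagged: {easy_flagged:.0%}, hard noise flagged: {hard_flagged:.0%}")
```

Under these assumed distributions, essentially all easy noise is caught while the hard-noise triplets are indistinguishable from clean ones, which is the failure mode motivating ConeSep's geometric noise boundary.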
The Composed Image Retrieval (CIR) task provides a flexible retrieval paradigm via a reference image and modification text, but it heavily relies on expensive and error-prone triplet annotations. This paper systematically investigates the Noisy Triplet Correspondence (NTC) problem introduced by such annotations. We find that NTC noise, particularly ``hard noise'' (i.e., the reference and target images are highly similar but the modification text is incorrect), poses a unique challenge to existing Noise Correspondence Learning (NCL) methods because it breaks the traditional ``small loss hypothesis''. We identify and elucidate three key, yet overlooked, challenges in the NTC task, namely (C1) Modality Suppression, (C2) Negative Anchor Deficiency, and (C3) Unlearning Backlash. To address these challenges, we propose a Cone-based robuSt noisE-unlearning comPositional network (ConeSep). Specifically, we first propose Geometric Fidelity Quantization, theoretically establishing and practically estimating a noise boundary to precisely locate noisy correspondences. Next, we introduce Negative Boundary Learning, which learns a ``diagonal negative combination'' for each query as its explicit semantic opposite anchor in the embedding space. Finally, we design Boundary-based Targeted Unlearning, which models the noise-correction process as an optimal transport problem, elegantly avoiding Unlearning Backlash. Extensive experiments on benchmark datasets (FashionIQ and CIRR) demonstrate that ConeSep significantly outperforms current state-of-the-art methods, confirming the effectiveness and robustness of our approach.
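The abstract frames noise correction as an optimal transport problem. As a minimal sketch of that idea (a generic entropy-regularized Sinkhorn solver, not ConeSep's actual formulation), one can treat a query-to-target similarity matrix as negative transport cost and reassign each noisy query to the target its transport mass concentrates on; all matrix values here are made up for illustration.

```python
import numpy as np

def sinkhorn(cost, eps=0.05, iters=200):
    """Entropy-regularized optimal transport via Sinkhorn iterations.

    Returns a transport plan with uniform row/column marginals.
    (Generic sketch; hypothetical stand-in for the paper's correction step.)
    """
    n, m = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-cost / eps)          # Gibbs kernel
    u = np.ones(n)
    for _ in range(iters):
        v = b / (K.T @ u)            # scale columns toward marginal b
        u = a / (K @ v)              # scale rows toward marginal a
    return u[:, None] * K * v[None, :]

# Toy similarity between 3 composed queries and 3 candidate targets (hypothetical).
sim = np.array([[0.9, 0.1, 0.2],
                [0.2, 0.8, 0.1],
                [0.1, 0.3, 0.7]])
plan = sinkhorn(1.0 - sim)           # high similarity -> low cost
corrected = plan.argmax(axis=1)      # reassigned target index per query
print(corrected)
```

Solving a global transport problem, rather than relabeling each sample independently, is one way to avoid over-correcting individual samples, which is in the spirit of the Unlearning Backlash the paper seeks to prevent.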