The paper introduces Crucial-Diff, a diffusion-based framework to synthesize crucial image and annotation pairs for training object detection and segmentation models in data-scarce scenarios. Crucial-Diff uses a Scene Agnostic Feature Extractor (SAFE) to capture target information and a Weakness Aware Sample Miner (WASM) that leverages feedback from a downstream model to generate hard-to-detect samples. Experiments on MVTec and a polyp dataset demonstrate that Crucial-Diff generates diverse, high-quality training data, achieving state-of-the-art performance in pixel-level AP, F1-MAX, mIoU, and mDice.
Stop generating repetitive synthetic data: Crucial-Diff leverages downstream model feedback to synthesize "crucial" training samples that specifically address model weaknesses, boosting performance in data-scarce scenarios.
The scarcity of data in domains such as medical imaging, industrial inspection, and autonomous driving leads to model overfitting and dataset imbalance, hindering effective detection and segmentation performance. Existing studies employ generative models to synthesize additional training samples to mitigate data scarcity. However, these synthetic samples are often repetitive or simplistic and fail to provide the “crucial information” that targets the downstream model’s weaknesses. Additionally, these methods typically require separate training for different objects, leading to computational inefficiency. To address these issues, we propose Crucial-Diff, a domain-agnostic framework designed to synthesize crucial samples. Our method integrates two key modules. The Scene Agnostic Feature Extractor (SAFE) uses a unified feature extractor to capture target information. The Weakness Aware Sample Miner (WASM) generates hard-to-detect samples using feedback from the detection results of the downstream model, which is then fused with the output of the SAFE module. Together, the Crucial-Diff framework generates diverse, high-quality training data, achieving a pixel-level AP of 83.63% and an F1-MAX of 78.12% on MVTec. On the polyp dataset, Crucial-Diff reaches an mIoU of 81.64% and an mDice of 87.69%. Code is publicly available at https://github.com/JJessicaYao/Crucial-diff
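The weakness-aware idea above can be illustrated at a toy scale: synthesize candidate samples, score each with the downstream model, and keep only the ones the model struggles on. This is a minimal sketch, not the paper's implementation; the function names (`synthesize_candidates`, `mine_weak_samples`) and the scalar `det_score` standing in for real detector feedback are all illustrative assumptions.

```python
import random

random.seed(0)

def synthesize_candidates(n):
    # Stand-in for the diffusion generator: in the real pipeline each sample
    # would be an image/annotation pair; here det_score in [0, 1] plays the
    # role of the downstream detector's confidence on that sample.
    return [{"id": i, "det_score": random.random()} for i in range(n)]

def mine_weak_samples(candidates, threshold=0.3):
    # Weakness-aware mining: retain samples the downstream model detects
    # poorly (low score), i.e. the "crucial" hard-to-detect cases.
    return [s for s in candidates if s["det_score"] < threshold]

candidates = synthesize_candidates(100)
crucial = mine_weak_samples(candidates)
print(f"kept {len(crucial)} of {len(candidates)} candidates as crucial")
```

In the actual framework this selection is not a post-hoc filter: WASM uses the detector's feedback to steer generation toward such samples directly, which is what makes the synthesized data target the model's weaknesses rather than re-cover easy cases.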