The paper introduces BackdoorIDS, a zero-shot backdoor detection method for pre-trained vision encoders that leverages attention hijacking and restoration under progressive input masking. BackdoorIDS detects backdoors by observing that backdoored images exhibit a rapid shift in attention and embedding space once the trigger is masked out, so that the embedding sequence along the masking trajectory splits into multiple clusters. Experiments demonstrate BackdoorIDS's superior performance over existing defenses across diverse attack types, datasets, and model families, without requiring retraining.
Uncover hidden backdoors in your pre-trained vision encoders without retraining, simply by watching how attention shifts as you mask parts of the image.
Self-supervised and multimodal vision encoders learn strong visual representations that are widely adopted in downstream vision tasks and large vision-language models (LVLMs). However, downstream users often rely on third-party pre-trained encoders of uncertain provenance, exposing them to backdoor attacks. In this work, we propose BackdoorIDS, a simple yet effective zero-shot, inference-time backdoor sample detection method for pre-trained vision encoders. BackdoorIDS is motivated by two observations: Attention Hijacking and Restoration. Under progressive input masking, a backdoored image initially concentrates attention on malicious trigger features. Once the masking ratio exceeds the trigger's robustness threshold, the trigger is deactivated and attention rapidly shifts to benign content. This transition induces a pronounced change in the image embedding, whereas embeddings of clean images evolve smoothly as masking progresses. BackdoorIDS operationalizes this signal by extracting an embedding sequence along the masking trajectory and applying density-based clustering such as DBSCAN. An input is flagged as backdoored if its embedding sequence forms more than one cluster. Extensive experiments show that BackdoorIDS consistently outperforms existing defenses across diverse attack types, datasets, and model families. Notably, it is a plug-and-play approach that requires no retraining and operates fully zero-shot at inference time, making it compatible with a wide range of encoder architectures, including CNNs, ViTs, CLIP, and LLaVA-1.5.
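The detection rule described above (embed along a masking trajectory, cluster with DBSCAN, flag if more than one cluster) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `embed_masked` callback, the `eps`/`min_samples` values, and the unit-normalization step are all assumptions for the sketch.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def count_clusters(embeddings, eps=0.5, min_samples=2):
    """Count DBSCAN clusters in a sequence of embeddings.

    DBSCAN labels noise points as -1; only true clusters are counted.
    """
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(embeddings)
    return len(set(labels) - {-1})

def is_backdoored(embed_masked, mask_ratios, eps=0.5):
    """Flag an input as backdoored if its embedding trajectory splits.

    `embed_masked` is a hypothetical helper: embed_masked(r) returns the
    encoder embedding of the input masked at ratio r. A backdoored image
    should produce an abrupt jump in embedding space once masking
    deactivates the trigger, yielding more than one cluster.
    """
    embs = np.stack([embed_masked(r) for r in mask_ratios])
    # Unit-normalize so eps operates on a comparable scale (assumption).
    embs = embs / np.linalg.norm(embs, axis=1, keepdims=True)
    return count_clusters(embs, eps=eps) > 1
```

In practice `embed_masked` would wrap the pre-trained encoder and a patch-masking routine; the sketch only captures the clustering-based decision rule.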