This paper identifies a vulnerability in Dummy Class defenses, which use a "dummy" class as a sink for adversarial examples and thereby obtain inflated robustness scores under standard attacks like AutoAttack. The authors propose the Dummy-Aware Weighted Attack (DAWA), which simultaneously targets both the true and dummy labels with adaptive weighting during adversarial example generation. Experiments show that DAWA significantly reduces the measured robustness of Dummy Class defenses, demonstrating the inadequacy of existing evaluation methods.
Dummy Class defenses, which appear robust under standard adversarial attacks, crumble under DAWA, a novel attack that targets both the true and dummy labels.
Adversarial robustness evaluation faces a critical challenge as new defense paradigms emerge that can exploit limitations in existing assessment methods. This paper reveals that Dummy Classes-based defenses, which introduce an additional "dummy" class as a safety sink for adversarial examples, achieve significantly overestimated robustness under conventional evaluation strategies like AutoAttack. The fundamental limitation stems from these attacks' singular focus on misleading the true class label, which aligns perfectly with the defense mechanism: successful attacks are simply captured by the dummy class. To address this gap, we propose Dummy-Aware Weighted Attack (DAWA), a novel evaluation method that simultaneously targets both the true label and dummy label with adaptive weighting during adversarial example synthesis. Extensive experiments demonstrate that DAWA effectively breaks this defense paradigm, reducing the measured robustness of a leading Dummy Classes-based defense from 58.61% to 29.52% on CIFAR-10 under ℓ∞ perturbation (ε = 8/255). Our work provides a more reliable benchmark for evaluating this emerging class of defenses and highlights the need for continuous evolution of robustness assessment methodologies.
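To make the attack idea concrete, here is a minimal sketch of a dummy-aware, adaptively weighted attack step. The paper does not specify its exact loss or weighting scheme, so everything below is an illustrative assumption: a linear softmax classifier (logits = x @ W + b) stands in for the defended model, the adaptive weight is taken to be the model's current probability on the dummy class, and a signed-gradient PGD update ascends a weighted sum of the cross-entropy losses on the true and dummy labels, pushing the example away from both.

```python
import numpy as np

def softmax(z):
    # Numerically stable row-wise softmax.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def dawa_attack(W, b, x, y_true, dummy_idx, eps=8/255, alpha=2/255, steps=10):
    """Hypothetical DAWA-style PGD sketch against a linear softmax classifier.

    Ascends a weighted sum of cross-entropy on the true label and on the
    dummy label, so the perturbed input is pushed away from BOTH classes.
    The weight w (probability mass on the dummy class) is the assumed
    adaptive weighting; the paper's actual scheme may differ.
    """
    n, num_classes = x.shape[0], W.shape[1]
    x_adv = x.copy()
    for _ in range(steps):
        p = softmax(x_adv @ W + b)
        w = p[:, dummy_idx]  # adaptive weight: grows as the dummy sink activates
        onehot_true = np.eye(num_classes)[y_true]
        onehot_dummy = np.zeros((n, num_classes))
        onehot_dummy[:, dummy_idx] = 1.0
        # For CE(logits, y) with softmax, dL/dx = (p - onehot(y)) @ W.T
        g_true = (p - onehot_true) @ W.T
        g_dummy = (p - onehot_dummy) @ W.T
        grad = (1 - w)[:, None] * g_true + w[:, None] * g_dummy
        # Signed-gradient ascent step, projected back into the eps-ball and [0, 1]
        x_adv = x_adv + alpha * np.sign(grad)
        x_adv = x + np.clip(x_adv - x, -eps, eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv
```

A standard PGD attack would ascend only `g_true`; the extra dummy-label term is what (per the paper's argument) keeps successful perturbations from being absorbed by the dummy sink.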