This paper introduces FogFool, a novel adversarial attack framework that generates physically plausible fog-based perturbations in remote sensing imagery by optimizing atmospheric patterns using Perlin noise. By mimicking natural fog formations, FogFool creates adversarial examples that are visually realistic and effectively deceive deep learning models. Experiments on benchmark datasets demonstrate FogFool's superior performance in white-box settings, exceptional black-box transferability (83.74% TASR), and robustness against common defenses, highlighting its potential as a persistent threat.
Forget pixel-level noise: FogFool shows that physically plausible, atmospherically modeled fog can achieve an 83.74% transfer attack success rate against remote sensing image classifiers, even surviving JPEG compression.
Adversarial attacks pose a severe threat to the reliability of deep learning models in remote sensing (RS) image classification. Most existing methods rely on direct pixel-wise perturbations, failing to exploit the inherent atmospheric characteristics of RS imagery or to survive real-world image degradations. In this paper, we propose FogFool, a physically plausible adversarial framework that generates fog-based perturbations by iteratively optimizing atmospheric patterns based on Perlin noise. By modeling fog formations with natural, irregular structures, FogFool produces adversarial examples that are both visually consistent with authentic RS scenes and highly deceptive. By leveraging the spatial coherence and mid-to-low-frequency nature of atmospheric phenomena, FogFool embeds adversarial information into structural features shared across diverse architectures. Extensive experiments on two benchmark RS datasets demonstrate that FogFool not only achieves superior attack success rates in white-box settings but also exhibits exceptional black-box transferability (reaching 83.74% TASR) and robustness against common preprocessing-based defenses such as JPEG compression and filtering. Detailed analyses, including confusion matrices and Class Activation Map (CAM) visualizations, reveal that our atmospheric-driven perturbations induce a universal shift in model attention. These results indicate that FogFool represents a practical, stealthy, and highly persistent threat to RS classification systems, providing a robust benchmark for evaluating model reliability in complex environments.
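To make the core idea concrete, the sketch below shows how a Perlin-noise density field can be turned into a physically plausible fog overlay via a standard atmospheric scattering model. This is a minimal illustration, not the authors' implementation: the function names (`generate_perlin_noise_2d`, `apply_fog`) and the parameters `beta` (scattering coefficient) and `airlight` are assumptions for this example, and the iterative adversarial optimization of the noise pattern described in the paper is omitted.

```python
import numpy as np

def generate_perlin_noise_2d(shape, res, rng=None):
    """Classic 2D Perlin noise: random gradients on a coarse grid,
    smoothly interpolated to the full image resolution."""
    rng = rng or np.random.default_rng(0)
    def fade(t):  # Perlin's quintic fade curve
        return 6 * t**5 - 15 * t**4 + 10 * t**3
    delta = (res[0] / shape[0], res[1] / shape[1])
    d = (shape[0] // res[0], shape[1] // res[1])
    grid = np.mgrid[0:res[0]:delta[0], 0:res[1]:delta[1]].transpose(1, 2, 0) % 1
    angles = 2 * np.pi * rng.random((res[0] + 1, res[1] + 1))
    gradients = np.dstack((np.cos(angles), np.sin(angles)))
    g00 = gradients[:-1, :-1].repeat(d[0], 0).repeat(d[1], 1)
    g10 = gradients[1:, :-1].repeat(d[0], 0).repeat(d[1], 1)
    g01 = gradients[:-1, 1:].repeat(d[0], 0).repeat(d[1], 1)
    g11 = gradients[1:, 1:].repeat(d[0], 0).repeat(d[1], 1)
    # dot products between gradients and offset vectors at each corner
    n00 = np.sum(np.dstack((grid[..., 0],     grid[..., 1]))     * g00, 2)
    n10 = np.sum(np.dstack((grid[..., 0] - 1, grid[..., 1]))     * g10, 2)
    n01 = np.sum(np.dstack((grid[..., 0],     grid[..., 1] - 1)) * g01, 2)
    n11 = np.sum(np.dstack((grid[..., 0] - 1, grid[..., 1] - 1)) * g11, 2)
    t = fade(grid)
    n0 = n00 * (1 - t[..., 0]) + t[..., 0] * n10
    n1 = n01 * (1 - t[..., 0]) + t[..., 0] * n11
    return np.sqrt(2) * ((1 - t[..., 1]) * n0 + t[..., 1] * n1)

def apply_fog(img, noise, beta=1.5, airlight=0.9):
    """Blend fog into an image with the atmospheric scattering model
    I_fog = I * t + A * (1 - t), where t = exp(-beta * density)."""
    density = (noise - noise.min()) / (noise.max() - noise.min() + 1e-8)
    t = np.exp(-beta * density)[..., None]  # per-pixel transmission map
    return np.clip(img * t + airlight * (1.0 - t), 0.0, 1.0)

# Example: fog a synthetic 64x64 RGB image in [0, 1]
noise = generate_perlin_noise_2d((64, 64), (4, 4))
img = np.full((64, 64, 3), 0.5)
fogged = apply_fog(img, noise)
```

In an attack loop of the kind the paper describes, one would optimize the parameters controlling the noise field (rather than raw pixels) against the classifier's loss, which is what keeps the perturbation spatially coherent and concentrated in mid-to-low frequencies.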