The paper identifies that standard interpolation-based upsampling techniques used in explainable AI corrupt attribution signals through aliasing and boundary bleeding. The authors propose Universal Semantic-Aware Upsampling (USU), a method that reformulates upsampling as a mass redistribution problem governed by model-derived semantic boundaries, preserving attribution mass and relative importance. USU is shown to satisfy formal desiderata for faithful upsampling, outperforming interpolation methods on ImageNet, CIFAR-10, and CUB-200 in terms of faithfulness and semantic coherence.
Standard upsampling methods in XAI systematically corrupt attribution signals, but a novel semantic-aware redistribution approach provably preserves attribution mass and improves explanation faithfulness.
Attribution methods in explainable AI rely on upsampling techniques that were designed for natural images, not saliency maps. Standard bilinear and bicubic interpolation systematically corrupts attribution signals through aliasing, ringing, and boundary bleeding, producing spurious high-importance regions that misrepresent model reasoning. We identify that the core issue is treating attribution upsampling as an interpolation problem that operates in isolation from the model's reasoning, rather than a mass redistribution problem where model-derived semantic boundaries must govern how importance flows. We present Universal Semantic-Aware Upsampling (USU), a principled method that reformulates upsampling through ratio-form mass redistribution operators, provably preserving attribution mass and relative importance ordering. Extending the axiomatic tradition of feature attribution to upsampling, we formalize four desiderata for faithful upsampling and prove that interpolation structurally violates three of them. These same three force any redistribution operator into a ratio form; the fourth selects the unique potential within this family, yielding USU. Controlled experiments on models with known attribution priors verify USU's formal guarantees; evaluation across ImageNet, CIFAR-10, and CUB-200 confirms consistent faithfulness improvements and qualitatively superior, semantically coherent explanations.
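The ratio-form redistribution idea can be illustrated with a minimal sketch: each coarse attribution cell spreads its value over its block of fine pixels in proportion to a per-pixel weight map (here a generic stand-in for the paper's model-derived semantic evidence; this is not the actual USU operator), so the total attribution mass is preserved exactly, whereas bilinear interpolation makes no such guarantee.

```python
import numpy as np

def mass_preserving_upsample(attr, weights, factor):
    """Spread each coarse cell's attribution over its factor x factor
    block of fine pixels, in proportion to per-pixel weights.

    `weights` has shape (H*factor, W*factor) and is a hypothetical
    stand-in for model-derived semantic evidence. Each fine pixel gets
    attr[i, j] * weight / (sum of weights in the block) -- the ratio
    form -- so the total mass sum(attr) is preserved exactly.
    Uniform weights reduce to block replication divided by factor**2.
    """
    H, W = attr.shape
    out = np.zeros((H * factor, W * factor), dtype=float)
    for i in range(H):
        for j in range(W):
            rows = slice(i * factor, (i + 1) * factor)
            cols = slice(j * factor, (j + 1) * factor)
            block = weights[rows, cols].astype(float)
            total = block.sum()
            if total == 0:
                # Degenerate block: fall back to uniform redistribution.
                out[rows, cols] = attr[i, j] / factor**2
            else:
                out[rows, cols] = attr[i, j] * block / total
    return out

coarse = np.array([[1.0, 2.0], [3.0, 4.0]])
fine = mass_preserving_upsample(coarse, np.ones((4, 4)), 2)
# Total mass is unchanged: fine.sum() == coarse.sum() == 10.0
```

Note that the redistribution never moves mass across coarse-cell boundaries, so spurious importance cannot bleed into neighboring regions the way interpolation kernels allow.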