This paper introduces ViT-AdaLA, a three-stage framework for adapting Vision Transformers (ViTs) to linear attention, thereby reducing computational complexity. The method aligns vanilla linear attention with softmax attention in each block, mitigates accumulated approximation errors by aligning final-layer features with a frozen softmax vision foundation model (VFM) teacher, and then fine-tunes on downstream tasks. Experiments on classification and segmentation show that ViT-AdaLA outperforms existing linear attention ViTs, effectively transferring knowledge from VFMs.
Ditch quadratic attention in your ViTs without sacrificing performance: ViT-AdaLA distills knowledge from pre-trained VFMs into linear attention architectures, achieving state-of-the-art results on classification and segmentation.
Vision Transformer (ViT)-based vision foundation models (VFMs) have achieved remarkable performance across diverse vision tasks, but suffer from quadratic complexity that limits scalability to long sequences. Existing linear attention approaches for ViTs are typically trained from scratch, requiring substantial computational resources, while linearization-based methods developed for large language model decoders do not transfer well to ViTs. To address these challenges, we propose ViT-AdaLA, a novel framework for effectively adapting and transferring prior knowledge from VFMs to linear attention ViTs. ViT-AdaLA consists of three stages: attention alignment, feature alignment, and supervised fine-tuning. In the attention alignment stage, we align vanilla linear attention with the original softmax-based attention in each block to approximate the behavior of softmax attention. However, residual approximation errors inevitably accumulate across layers. We mitigate this by fine-tuning the linearized ViT to align its final-layer features with a frozen softmax VFM teacher. Finally, the adapted prior knowledge is transferred to downstream tasks through supervised fine-tuning. Extensive experiments on classification and segmentation tasks demonstrate the effectiveness and generality of ViT-AdaLA over various state-of-the-art linear attention counterparts.
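The three-stage recipe maps naturally onto standard distillation code. Below is a minimal PyTorch-style sketch of the first two stages, assuming a vanilla linear attention with an elu(x)+1 kernel feature map and MSE alignment objectives; these module names and loss choices are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of the attention-alignment and feature-alignment stages.
# The kernel feature map, loss functions, and module names are assumptions
# for illustration; the paper does not prescribe these exact choices.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LinearAttention(nn.Module):
    """Vanilla linear attention: softmax(QK^T)V is replaced by phi(Q)(phi(K)^T V),
    reducing complexity from O(N^2 d) to O(N d^2) in sequence length N."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    @staticmethod
    def phi(x: torch.Tensor) -> torch.Tensor:
        # Positive kernel feature map (assumed choice: elu(x) + 1).
        return F.elu(x) + 1.0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)            # each: (B, H, N, d)
        q, k = self.phi(q), self.phi(k)
        kv = torch.einsum("bhnd,bhne->bhde", k, v)       # (B, H, d, d)
        z = 1.0 / (torch.einsum("bhnd,bhd->bhn", q, k.sum(dim=2)) + 1e-6)
        out = torch.einsum("bhnd,bhde,bhn->bhne", q, kv, z)
        out = out.transpose(1, 2).reshape(B, N, C)
        return self.proj(out)


def attention_alignment_loss(linear_blocks, softmax_blocks, x):
    """Stage 1: align each linear attention block with its frozen softmax
    counterpart, feeding both the teacher's features so alignment is local."""
    loss, h = 0.0, x
    for lin_blk, soft_blk in zip(linear_blocks, softmax_blocks):
        out_soft = soft_blk(h).detach()                  # frozen teacher block
        out_lin = lin_blk(h)
        loss = loss + F.mse_loss(out_lin, out_soft)
        h = out_soft                                     # propagate teacher features
    return loss


def feature_alignment_loss(student_vit, teacher_vit, x):
    """Stage 2: align the linearized student's final-layer features with the
    frozen softmax VFM teacher to absorb accumulated approximation error."""
    with torch.no_grad():
        target = teacher_vit(x)                          # final-layer features
    return F.mse_loss(student_vit(x), target)
```

Stage 3 (supervised fine-tuning) then trains the linearized ViT with the usual task loss on the downstream classification or segmentation dataset.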