This paper addresses the problem of modality-specific distribution shifts in vision-language models during test-time adaptation, where standard entropy minimization can be detrimental because an unreliable modality can dominate the fusion process. The authors propose MG-MTTA, a method that freezes the backbone and adapts a lightweight gate or adapter with a novel objective that combines fused-posterior entropy minimization with a reliability-aware gate prior based on modality consistency and conflict. Experiments on ImageNet-based benchmarks demonstrate that MG-MTTA significantly improves top-1 accuracy under textual and joint visual-textual shifts, highlighting the importance of controlling modality reliability during multimodal test-time adaptation.
Test-time adaptation of vision-language models can actually *hurt* performance when modalities shift asymmetrically; MG-MTTA fixes this by explicitly modeling modality reliability.
Vision-language models transfer well in zero-shot settings, but at deployment the visual and textual branches often shift asymmetrically. Under this condition, entropy-based test-time adaptation can sharpen the fused posterior while increasing error, because an unreliable modality may still dominate fusion. We study this failure mode through a majorization view of multimodal posteriors and cast adaptation as a constrained de-mixing problem on the fused prediction. Based on this view, we propose MG-MTTA, which keeps the backbone frozen and updates only a lightweight gate or adapter. The objective combines fused-posterior entropy minimization with a reliability-aware gate prior built from anchor-based modality consistency and cross-modal conflict. Our analysis gives conditions under which entropy reduction preserves the correct ranking and a threshold that characterizes modality-dominance failure. On the ImageNet-based benchmark, MG-MTTA improves top-1 accuracy from 57.97 to 66.51 under semantics-preserving textual shift and from 21.68 to 26.27 under joint visual-textual shift, while remaining competitive on the visual-only benchmark. These results show that multimodal test-time adaptation should control modality reliability, not just prediction entropy.
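To make the setup concrete, below is a minimal sketch of what one MG-MTTA-style adaptation step could look like in a PyTorch-style setting. The abstract does not specify the exact form of the gate, the reliability scores, or the prior, so the gate module, the anchor-based consistency scores, the L1 conflict measure, and the weight `lam` are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of one MG-MTTA-style test-time update (illustrative names,
# not the authors' code). The backbone is frozen; only the gate is adapted.
import torch
import torch.nn.functional as F

def adaptation_step(gate, gate_opt, feats_v, feats_t, logits_v, logits_t,
                    anchors_v, anchors_t, lam=1.0):
    """One gradient step on a lightweight gate.

    feats_*  : (B, D) per-modality features from the frozen backbone
    logits_* : (B, C) per-modality class logits (no grad; backbone frozen)
    anchors_*: (K, D) anchor embeddings used to score modality reliability
    """
    # Gate weight in (0, 1): how much the fusion trusts the visual branch.
    w = torch.sigmoid(gate(torch.cat([feats_v, feats_t], dim=-1)))  # (B, 1)

    # Fused posterior as a convex combination of per-modality posteriors.
    p_v = F.softmax(logits_v, dim=-1)
    p_t = F.softmax(logits_t, dim=-1)
    p_fused = w * p_v + (1.0 - w) * p_t

    # (1) Fused-posterior entropy minimization.
    entropy = -(p_fused * p_fused.clamp_min(1e-8).log()).sum(-1).mean()

    # (2) Reliability-aware gate prior (assumed form): score each modality by
    # its best cosine similarity to the anchors, then penalize gate values
    # that disagree with the relative reliability, more strongly when the two
    # modalities conflict (large total-variation gap between their posteriors).
    rel_v = F.cosine_similarity(feats_v.unsqueeze(1), anchors_v.unsqueeze(0),
                                dim=-1).amax(-1).clamp_min(0.0)       # (B,)
    rel_t = F.cosine_similarity(feats_t.unsqueeze(1), anchors_t.unsqueeze(0),
                                dim=-1).amax(-1).clamp_min(0.0)       # (B,)
    conflict = 0.5 * (p_v - p_t).abs().sum(-1)                        # in [0, 1]
    target_w = (rel_v / (rel_v + rel_t + 1e-8)).unsqueeze(-1)         # (B, 1)
    gate_prior = (((w - target_w) ** 2) * (1.0 + conflict).unsqueeze(-1)).mean()

    loss = entropy + lam * gate_prior
    gate_opt.zero_grad()
    loss.backward()
    gate_opt.step()
    return loss.item()
```

The one design point carried over directly from the abstract is that gradients flow only through the gate: the backbone and its logits are treated as fixed, and the loss trades off sharpening the fused posterior against keeping the gate consistent with per-modality reliability.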