The paper investigates the role of individual layers in Vision-Language Models (VLMs) and discovers Task-Interfering Layers (TILs) that hinder downstream task performance. The authors quantify the effect of intervening on each layer with a Task-Layer Interaction Vector and observe task-specific sensitivity patterns. Based on these findings, they propose TaLo, a training-free, test-time adaptation method that dynamically identifies and bypasses the most interfering layer, yielding significant gains across tasks and models.
VLMs have hidden modularity: selectively knocking out specific layers *improves* performance on downstream tasks by up to 16.6%, without any training.
Current VLMs have demonstrated capabilities across a wide range of multimodal tasks. Typically, in a pretrained VLM, all layers are engaged by default to make predictions on downstream tasks. We find that intervening on a single layer, such as by zeroing its parameters, can improve performance on certain tasks, indicating that some layers hinder rather than help downstream tasks. We systematically investigate how individual layers influence different tasks via layer intervention. Specifically, we measure the change in performance relative to the base model after intervening on each layer and observe improvements when bypassing specific layers. This improvement generalizes across models and datasets, indicating the presence of Task-Interfering Layers that harm downstream task performance. We introduce the Task-Layer Interaction Vector, which quantifies the effect of intervening on each layer of a VLM given a task. These Task-Interfering Layers exhibit task-specific sensitivity patterns: tasks requiring similar capabilities show consistent response trends under layer interventions, as evidenced by the high similarity of their task-layer interaction vectors. Inspired by these findings, we propose TaLo (Task-Adaptive Layer Knockout), a training-free, test-time adaptation method that dynamically identifies and bypasses the most interfering layer for a given task. Without parameter updates, TaLo improves performance across various models and datasets, including boosting Qwen-VL's accuracy on the Maps task in ScienceQA by up to 16.6%. Our work reveals an unexpected form of modularity in pretrained VLMs and provides a plug-and-play, training-free mechanism to unlock hidden capabilities at inference time. The source code will be publicly available.
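For readers who want a concrete picture of the procedure, below is a minimal sketch of single-layer knockout and the resulting layer-score vector. It assumes a HuggingFace-style VLM whose decoder blocks sit in an `nn.ModuleList` at `model.language_model.layers`; that attribute path, the `evaluate(model, task_data)` callback, and all function names are illustrative assumptions, not the paper's released implementation.

```python
# Sketch of layer knockout and a per-layer interaction score, in the
# spirit of the paper's method. Assumes a PyTorch/HuggingFace-style VLM;
# attribute paths and helper names are hypothetical placeholders.
import torch.nn as nn

class SkipLayer(nn.Module):
    """Identity stand-in for a transformer block: hidden states pass
    through unchanged, so the block is effectively bypassed."""
    def forward(self, hidden_states, *args, **kwargs):
        # Many HF decoder blocks return a tuple whose first element is
        # the hidden states; returning a 1-tuple keeps callers that
        # unpack `layer_outputs[0]` working (with caching/attention
        # outputs disabled). Other architectures may differ.
        return (hidden_states,)

def task_layer_interaction_vector(model, task_data, evaluate):
    """Score each layer as (accuracy with layer i bypassed) minus
    (base accuracy). Positive entries flag interfering layers."""
    layers = model.language_model.layers  # assumed attribute path
    base_acc = evaluate(model, task_data)
    scores = []
    for i in range(len(layers)):
        original = layers[i]
        layers[i] = SkipLayer()           # knock out layer i
        scores.append(evaluate(model, task_data) - base_acc)
        layers[i] = original              # restore before the next probe
    return scores

def talo_style_knockout(model, task_data, evaluate):
    """Test-time adaptation: permanently bypass the single most
    interfering layer for this task, if one exists."""
    scores = task_layer_interaction_vector(model, task_data, evaluate)
    worst = max(range(len(scores)), key=scores.__getitem__)
    if scores[worst] > 0:  # only intervene when some layer hurts
        model.language_model.layers[worst] = SkipLayer()
    return model, worst, scores[worst]
```

Note that the whole loop runs under inference only: no gradients or parameter updates are involved, which is what makes the approach training-free and cheap enough to apply per task at test time.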