This paper investigates why Vision Language Models (VLMs) struggle with tasks requiring fine-grained visual perception, even when the necessary visual information is present in their internal representations. It finds that VLMs prioritize mapping visual information into the textual space, hindering their ability to reason about visual entities that lack corresponding language concepts. Through visual correspondence tasks and Logit Lens analyses, the study demonstrates that VLMs perform better on nameable entities and that assigning arbitrary names to unknown entities improves performance, suggesting a reliance on textual descriptions rather than pure visual reasoning.
VLMs are surprisingly bad at visually matching objects unless they can name them, revealing a critical reliance on textual anchors that overshadows their visual processing capabilities.
Vision Language Models (VLMs) achieve impressive performance across a wide range of multimodal tasks. However, on tasks that demand fine-grained visual perception, they often fail even when the required information is present in their internal representations. In this work, we demonstrate that this gap arises from their narrow training pipeline, which focuses on moving visual information into the textual space. Consequently, VLMs can only reason about visual entities that can be mapped to known concepts in the language space, leaving vision-focused tasks such as visual correspondence and reasoning about novel visual entities poorly supported. As a result, VLMs are severely limited in several important multimodal capabilities: they fall back on brittle, hallucinated textual descriptions for visual entities they cannot map to language concepts. We verify this behavior through visual correspondence tasks, in which VLMs must detect matching entities between two images. Testing across semantic, shape, and face correspondence tasks, we find that VLMs perform much better when the relevant entities are nameable in language than when they are not. Mechanistically, our Logit Lens analyses confirm that VLMs explicitly assign semantic labels to nameable entities and surface more unique corresponding tokens for them than for unnameable entities. Furthermore, we show that teaching completely arbitrary names for unknown entities improves performance, yet task-specific finetuning yields even stronger generalization without relying on language priors. Our findings suggest that current VLM failures on visual tasks reflect learned shortcuts from their training, rather than a fundamental limitation of multimodal architectures.
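The Logit Lens analysis mentioned above projects a model's intermediate hidden states through its final output head to read off which tokens each layer "thinks" a visual token represents. A minimal sketch of the core operation, using random weights and hypothetical dimensions (the paper's actual models and layer choices are not reproduced here):

```python
import torch

def logit_lens(hidden_states, unembed, norm=None, top_k=5):
    """Project intermediate hidden states through the output (unembedding)
    head to read off per-layer token predictions (the Logit Lens)."""
    if norm is not None:
        # Apply the model's final normalization before unembedding,
        # as is standard when applying the lens to pre-norm transformers.
        hidden_states = norm(hidden_states)
    logits = hidden_states @ unembed.T            # shape (..., vocab)
    return logits.topk(top_k, dim=-1).indices     # top-k token ids per state

# Toy demonstration with random weights (illustrative sizes only).
torch.manual_seed(0)
d_model, vocab, n_layers = 64, 1000, 8
unembed = torch.randn(vocab, d_model)            # stand-in for the LM head
norm = torch.nn.LayerNorm(d_model)
# One hidden state per layer for a single "visual token" position.
hidden = torch.randn(n_layers, d_model)
top_tokens = logit_lens(hidden, unembed, norm)
print(top_tokens.shape)  # torch.Size([8, 5])
```

In the paper's setting, decoding nameable entities this way surfaces meaningful semantic labels at intermediate layers, while unnameable entities yield fewer unique corresponding tokens.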