This paper investigates the "cone effect" and modality gap in vision-language models (VLMs), particularly in the medical domain, and their impact on downstream multimodal performance. The authors introduce a post-hoc mechanism with a single hyperparameter λ that controls cross-modal separation without retraining the VLM encoders. Experiments on generalist and medical VLMs show that reducing an excessive modality gap improves downstream performance, especially on medical datasets, but optimal performance requires a task-dependent, intermediate degree of separation.
Medical vision-language models perform better when the modality gap is tuned to an intermediate level, challenging the assumption that minimizing it is always optimal.
Vision-Language Models (VLMs) exhibit a characteristic "cone effect" in which nonlinear encoders map embeddings into highly concentrated regions of the representation space, contributing to a cross-modal separation known as the modality gap. While this phenomenon has been widely observed, its practical impact on supervised multimodal learning, particularly in medical domains, remains unclear. In this work, we introduce a lightweight post-hoc mechanism that keeps pretrained VLM encoders frozen while continuously controlling cross-modal separation through a single hyperparameter λ. This enables systematic analysis of how the modality gap affects downstream multimodal performance without expensive retraining. We evaluate generalist (CLIP, SigLIP) and medically specialized (BioMedCLIP, MedSigLIP) models across diverse medical and natural datasets in a supervised multimodal setting. Results consistently show that reducing an excessive modality gap improves downstream performance, with medical datasets exhibiting stronger sensitivity to gap modulation; however, fully collapsing the gap is not always optimal, and an intermediate, task-dependent separation yields the best results. These findings position the modality gap as a tunable property of multimodal representations rather than a quantity that should be universally minimized.
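The abstract does not spell out the mechanism's exact form, but a common way to modulate the modality gap post hoc, with frozen encoders, is to translate each modality's embeddings along the vector between the two modality centroids and renormalize. The sketch below illustrates this assumed formulation; `shift_modality_gap` and its interpretation of λ (λ = 0 leaves embeddings unchanged, λ = 1 collapses the centroids before renormalization) are illustrative choices, not necessarily the paper's method.

```python
import numpy as np

def shift_modality_gap(img_emb: np.ndarray, txt_emb: np.ndarray, lam: float):
    """Post-hoc modality-gap modulation (assumed form, encoders stay frozen).

    Each modality is shifted halfway along the inter-centroid (gap) vector,
    scaled by lam, then projected back to the unit sphere, since CLIP-style
    embeddings are typically L2-normalized.
    """
    gap = txt_emb.mean(axis=0) - img_emb.mean(axis=0)  # modality-gap vector
    img_shifted = img_emb + (lam / 2.0) * gap          # move images toward text
    txt_shifted = txt_emb - (lam / 2.0) * gap          # move text toward images
    img_shifted /= np.linalg.norm(img_shifted, axis=1, keepdims=True)
    txt_shifted /= np.linalg.norm(txt_shifted, axis=1, keepdims=True)
    return img_shifted, txt_shifted
```

Sweeping λ over, say, `np.linspace(0, 1.5, 16)` and evaluating a downstream classifier at each value is the kind of systematic analysis the abstract describes, with the best λ expected to be task-dependent rather than at the extremes.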