This paper investigates the potential of using pre-trained text-to-image diffusion models, specifically Stable Diffusion, as instruction-aware visual encoders for multimodal large language models (MLLMs), addressing the limitations of CLIP in capturing fine-grained details. The authors find that diffusion features are semantically rich, encode strong image-text alignment, and can be focused on relevant regions using text conditioning. They propose a fusion strategy combining CLIP and conditional diffusion features, demonstrating improved performance on VQA and specialized MLLM benchmarks, particularly in vision-centric tasks.
Stable Diffusion can serve as a surprisingly effective, instruction-aware visual encoder for MLLMs, outperforming CLIP on tasks requiring spatial and compositional reasoning.
Recent advances in multimodal large language models (MLLMs) have enabled image-based question-answering capabilities. However, a key limitation is the use of CLIP as the visual encoder; while it captures coarse global information, it often misses fine-grained details that are relevant to the input query. To address these shortcomings, this work studies whether pre-trained text-to-image diffusion models can serve as instruction-aware visual encoders. Through an analysis of their internal representations, we find diffusion features are both semantically rich and encode strong image-text alignment. Moreover, we find that we can leverage text conditioning to focus the model on regions relevant to the input question. We then investigate how to align these features with large language models and uncover a leakage phenomenon, where the LLM can inadvertently recover information from the original diffusion prompt. We analyze the causes of this leakage and propose a mitigation strategy. Based on these insights, we explore a simple fusion strategy that utilizes both CLIP and conditional diffusion features. We evaluate our approach on both general VQA and specialized MLLM benchmarks, demonstrating the promise of diffusion models for visual understanding, particularly in vision-centric tasks that require spatial and compositional reasoning. Our project page can be found at https://vatsalag99.github.io/mustafar/.
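To make the fusion idea concrete, here is a minimal, hypothetical sketch of combining per-patch CLIP features with text-conditioned diffusion features via concatenation followed by a learned linear projection into the LLM embedding space. All names, shapes, and the dependency-free matrix arithmetic are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch (not the paper's code): fuse CLIP patch tokens with
# diffusion patch tokens by concatenation, then linearly project each fused
# vector into the LLM embedding dimension. Vectors are plain Python lists.

def fuse_features(clip_tokens, diff_tokens, proj):
    """Concatenate per-patch CLIP and diffusion features, then apply a
    linear projection (proj is an [llm_dim x (d_clip + d_diff)] matrix)."""
    fused = []
    for c, d in zip(clip_tokens, diff_tokens):
        x = c + d  # list concatenation -> vector of length d_clip + d_diff
        fused.append([sum(w * v for w, v in zip(row, x)) for row in proj])
    return fused

# Toy example: 2 patches, d_clip = d_diff = 2, LLM embedding dim = 3.
clip_tokens = [[1.0, 0.0], [0.0, 1.0]]
diff_tokens = [[0.5, 0.5], [1.0, -1.0]]
proj = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 1, 1]]  # 3 x 4 projection matrix
print(fuse_features(clip_tokens, diff_tokens, proj))
# -> [[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]]
```

In a real system the projection would be a trained layer and the diffusion tokens would come from UNet activations conditioned on the question text; the sketch only shows the data flow of the fusion step.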