The paper introduces TIPSv2, a vision-language pretraining framework designed to improve patch-text alignment in foundation models. The authors find that patch-level distillation surprisingly boosts alignment, with the student even surpassing the teacher model, and propose iBOT++, an improved masked-image objective. By combining iBOT++, a modified EMA setup, and a caption sampling strategy, TIPSv2 achieves strong results, on par with or better than recent vision encoders, across a diverse set of downstream vision tasks.
Distilling patch-text alignment knowledge from a teacher model to a student surprisingly *improves* the student's alignment beyond that of the teacher.
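The page does not spell out the distillation objective; as a minimal sketch of patch-level distillation, one standard instantiation matches the student's dense patch embeddings to a frozen teacher's at the same spatial locations via cosine similarity. The function name, tensor shapes, and choice of loss below are assumptions for illustration, not the paper's recipe:

```python
import torch
import torch.nn.functional as F

def patch_distillation_loss(student_patches: torch.Tensor,
                            teacher_patches: torch.Tensor) -> torch.Tensor:
    """Match the student's per-patch embeddings to a frozen teacher's.

    student_patches, teacher_patches: [batch, num_patches, dim] dense patch
    embeddings from the two vision encoders, aligned by spatial location.
    """
    s = F.normalize(student_patches, dim=-1)
    t = F.normalize(teacher_patches.detach(), dim=-1)  # no gradient to teacher
    # Negative cosine similarity, averaged over all patches in the batch.
    return -(s * t).sum(dim=-1).mean()
```

Normalizing both sides makes the loss scale-invariant, so only the direction of each patch embedding is distilled; the finding above is that a student trained with an objective of this shape can end up better aligned with text than its teacher.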
Recent progress in vision-language pretraining has enabled significant improvements to many downstream computer vision applications, such as classification, retrieval, segmentation and depth prediction. However, a fundamental capability that these models still struggle with is aligning dense patch representations with text embeddings of corresponding concepts. In this work, we investigate this critical issue and propose novel techniques to enhance this capability in foundational vision-language models. First, we reveal that a patch-level distillation procedure significantly boosts dense patch-text alignment -- surprisingly, the patch-text alignment of the distilled student model strongly surpasses that of the teacher model. This observation inspires us to consider modifications to pretraining recipes, leading us to propose iBOT++, an upgrade to the commonly used iBOT masked image objective, where unmasked tokens also contribute directly to the loss. This dramatically enhances patch-text alignment of pretrained models. Additionally, to improve vision-language pretraining efficiency and effectiveness, we modify the exponential moving average setup in the learning recipe, and introduce a caption sampling strategy to benefit from synthetic captions at different granularities. Combining these components, we develop TIPSv2, a new family of image-text encoder models suitable for a wide range of downstream applications. Through comprehensive experiments on 9 tasks and 20 datasets, we demonstrate strong performance, generally on par with or better than recent vision encoder models. Code and models are released via our project page at https://gdm-tipsv2.github.io/.
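The abstract describes iBOT++ only at a high level: in standard iBOT, only masked patch tokens are scored against the EMA teacher's targets, whereas in iBOT++ unmasked tokens also contribute directly to the loss. Below is a minimal sketch consistent with that description; the function name, prototype-logit shapes, and weighting term are assumptions, and the teacher-side centering and temperature scheduling used in practice are omitted for brevity:

```python
import torch
import torch.nn.functional as F

def ibot_pp_loss(student_logits: torch.Tensor,
                 teacher_logits: torch.Tensor,
                 mask: torch.Tensor,
                 unmasked_weight: float = 1.0) -> torch.Tensor:
    """Sketch of an iBOT-style patch objective extended to unmasked tokens.

    student_logits: [B, N, K] student head outputs over K prototypes, computed
        from a view in which the patches flagged by `mask` were masked out.
    teacher_logits: [B, N, K] EMA-teacher head outputs on the clean view.
    mask: [B, N] boolean, True where a patch was masked for the student.
    """
    targets = F.softmax(teacher_logits.detach(), dim=-1)   # soft teacher targets
    log_probs = F.log_softmax(student_logits, dim=-1)
    per_token = -(targets * log_probs).sum(dim=-1)         # cross-entropy per patch
    masked_loss = per_token[mask].mean()                   # standard iBOT term
    unmasked_loss = per_token[~mask].mean()                # the iBOT++ addition
    return masked_loss + unmasked_weight * unmasked_loss
```

The second term is what distinguishes this sketch from plain iBOT: every patch token, not just the masked ones, receives a direct training signal from the teacher.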
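The abstract states that the exponential moving average setup is modified but not how; for reference, the standard EMA teacher update that iBOT/DINO-style recipes build on is shown below (a baseline sketch, not the paper's modification):

```python
import torch

@torch.no_grad()
def ema_update(teacher: torch.nn.Module,
               student: torch.nn.Module,
               momentum: float = 0.999) -> None:
    """Standard EMA teacher update: teacher <- m * teacher + (1 - m) * student.

    Typical recipes also schedule `momentum` toward 1.0 over training; the
    paper's specific change to this setup is not described on this page.
    """
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param.detach(), alpha=1.0 - momentum)
```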