The paper introduces the Omnivorous Vision Encoder, a framework designed to learn modality-agnostic feature representations by aligning features from different modalities of the same scene. This is achieved through a dual objective: maximizing feature alignment between modalities and distilling knowledge from a frozen DINOv2 teacher model. The resulting encoder produces consistent embeddings across modalities (RGB, Depth, Segmentation), improving cross-modal understanding without sacrificing the discriminative power of the original DINOv2 features.
DINOv2's impressive unimodal performance doesn't translate to cross-modal understanding, but a simple training tweak can align embeddings across RGB, depth, and segmentation without sacrificing feature quality.
Pre-trained vision encoders like DINOv2 have demonstrated exceptional performance on unimodal tasks. However, we observe that their feature representations are poorly aligned across different modalities. For instance, the feature embedding of an RGB image and that of its corresponding depth map of the same scene exhibit a cosine similarity nearly identical to that of two random, unrelated images. To address this, we propose the Omnivorous Vision Encoder, a novel framework that learns a modality-agnostic feature space. We train the encoder with a dual objective: first, to maximize the feature alignment between different modalities of the same scene; and second, to distill knowledge by anchoring the learned representations to the output of a fully frozen teacher such as DINOv2. The resulting student encoder becomes "omnivorous," producing a consistent, powerful embedding for a given scene regardless of the input modality (RGB, Depth, Segmentation, etc.). This approach enables robust cross-modal understanding while retaining the discriminative semantics of the original foundation model.
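The dual objective described above can be sketched as a simple combined loss: an alignment term that pushes the cosine similarity between embeddings of different modalities of the same scene toward 1, plus a distillation term that keeps the student's features close to the frozen teacher's. The function names, the mean-squared-error distillation term, and the weighting factor `lam` below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def cosine_sim(a, b):
    # cosine similarity between two embedding vectors
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def dual_objective(z_rgb, z_depth, z_teacher, lam=1.0):
    """Sketch of the dual training objective (assumed form, not the paper's).

    z_rgb, z_depth: student embeddings of two modalities of the same scene.
    z_teacher: frozen DINOv2 teacher embedding of the scene.
    lam: assumed weighting between the two terms.
    """
    # alignment term: maximize cross-modal cosine similarity
    align = 1.0 - cosine_sim(z_rgb, z_depth)
    # distillation term: anchor the student to the frozen teacher
    distill = float(np.mean((z_rgb - z_teacher) ** 2))
    return align + lam * distill
```

When the modalities are perfectly aligned and the student matches the teacher, both terms vanish and the loss is zero; training would minimize this over batches of paired modalities.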