Max Planck Institute for Informatics, SIC, VIA Research Center
Ditch the language priors: SSL-R1 unlocks verifiable rewards for MLLM reinforcement learning directly from images, using self-supervision to solve visual puzzles.
LVLMs can self-detect and correct object hallucinations by focusing on specific image regions, offering a simple, training-free fix.
SAM, designed for instance segmentation, can be surprisingly effective for semantic segmentation with weak supervision when adapted with techniques like skeleton-based prompting and iterative pseudo-label refinement.
Seemingly beneficial architectural choices for end-to-end driving, like high-resolution perception, can actually hinder scalable closed-loop performance, highlighting the need for careful co-design.
DNN neurons often fire *more* strongly when a concept is missing, revealing a blind spot in standard XAI methods that can now be addressed.
By dynamically adjusting contrastive learning temperatures based on data density, MM-TS achieves state-of-the-art results on multimodal long-tail datasets.
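The idea of tying the contrastive temperature to local data density can be sketched in a few lines. The following is a minimal illustration, not the MM-TS implementation: the per-sample density estimate (mean similarity to the k nearest neighbors) and the specific temperature mapping are assumptions for the sketch.

```python
import numpy as np

def density_adaptive_infonce(z1, z2, base_tau=0.07, k=5):
    """InfoNCE loss with a per-sample temperature scaled by local density.

    A sketch of density-adaptive temperature scheduling: dense regions
    (high neighbor similarity) get a temperature near base_tau to sharpen
    separation; sparse tail regions get a softer, higher temperature.
    z1, z2: embeddings of two views, shape (N, D).
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T                      # (N, N) cosine similarities
    n = len(z1)
    # Density proxy: mean similarity to k nearest neighbors,
    # excluding the positive pair on the diagonal.
    off_diag = sim - np.eye(n) * 2.0     # push diagonal below any cosine
    knn_sim = np.sort(off_diag, axis=1)[:, -k:].mean(axis=1)  # (N,)
    tau = base_tau * (2.0 - knn_sim)     # knn_sim in [-1, 1] -> tau > 0
    logits = sim / tau[:, None]
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(n), np.arange(n)].mean()
```

With a fixed temperature, head classes in long-tail data dominate the gradient; letting sparse samples use a softer temperature is one way to rebalance, which is the intuition the blurb describes.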
Mechanistic interpretability gets a formal footing: "Certified Circuits" uses data subsampling to find provably stable sub-networks, boosting accuracy by up to 91% while using 45% fewer neurons.