Unimodal models might already understand each other better than we thought: a shared relational structure, formalized via category theory, unlocks zero-shot cross-modal alignment.
MLLMs that ace standard Referring Expression Comprehension benchmarks still stumble on images designed to eliminate shortcuts, revealing a surprising lack of robust visual reasoning.
You can drastically improve text-to-image retrieval from short, ambiguous queries by using a language model to generate richer, quality-aware descriptions.