MBZUAI
Robots can now recover from failures during manipulation tasks by explicitly tracking progress against spatial subgoals, without needing extra training data or models.
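The recovery idea above can be sketched generically: execute each spatial subgoal in order, measure progress against it, and re-attempt a step whose progress check fails. Everything below (the `Subgoal` type, the tolerance threshold, the `execute_step` and `get_pose` stubs) is a hypothetical illustration of the general pattern, not the paper's actual method.

```python
from dataclasses import dataclass

@dataclass
class Subgoal:
    name: str
    target: tuple   # hypothetical 3D position in the workspace
    tolerance: float

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def run_with_recovery(subgoals, execute_step, get_pose, max_retries=3):
    """Execute subgoals in order; retry any step whose progress check fails."""
    for sg in subgoals:
        for _ in range(max_retries):
            execute_step(sg)                      # command the motion
            if distance(get_pose(), sg.target) <= sg.tolerance:
                break                             # subgoal reached, move on
        else:
            return False                          # recovery budget exhausted
    return True
```

Because progress is checked against explicit subgoals rather than inferred from the policy itself, the loop needs no extra training data or models, matching the claim in the summary.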
By communicating in a shared latent space, Latent-DARM lets you combine the global planning of diffusion models with the fluency of autoregressive models, boosting reasoning accuracy by up to 14% while slashing token usage.
Open-ended reinforcement learning with LLM-based rewards unlocks surprisingly strong performance in medical reasoning for multimodal models, even with limited training data.
Forget cloud GPUs – a new model brings unified multimodal understanding and generation to your iPhone, running 6x faster than alternatives.
Geospatial agents can now reason more effectively about satellite imagery thanks to a new framework that aligns models with verified multi-step tool interactions and explicit reasoning traces.