Agents that ace long-context recall can still fail badly when they need to use that memory to actually *do* something, exposing a critical gap in how we currently evaluate memory in AI.
Forget fine-tuning: DM0 shows that pretraining a VLA model from scratch on diverse embodied and non-embodied data yields state-of-the-art performance on physical AI tasks.