Robots can now learn from their mistakes in real-time via a novel reflective planning framework, leading to significant performance gains in long-horizon tasks.
LLMs can be steered more effectively by viewing activation manipulation through the lens of ordinary differential equations and control theory, leading to significant gains in alignment benchmarks.
Agents that ace long-context recall can still fail badly when they need to use that memory to actually *do* something, revealing a critical gap in how we currently evaluate memory in AI.