Latent reasoning can beat explicit Chain-of-Thought, but only if the model is forced to learn causal dynamics through a visual world model rather than language alone.
Swap out your LLM's inefficient attention mechanism for a faster one without retraining from scratch.
Current AI agents struggle to maintain accurate beliefs in evolving information environments, with performance varying significantly with both model capability (a 15.4% range) and framework design (a 9.2% range).
VLLMs can be made much faster without sacrificing accuracy by using optimal transport to intelligently merge redundant tokens across space and time.