Stop hard-coding reasoning strategies for your LLM agent: a learned router that dynamically picks the best paradigm for each task boosts performance by up to 5.5%, beating even the best fixed strategy.
Coordinating embodied multi-agent systems doesn't require end-to-end training: planning can be offloaded to a VLM in simulation, then transferred back to the real world for execution.
Forget RLHF, self-play, or chain-of-thought: geometric consensus with sparse supervision unlocks scientific reasoning in LLMs.