Forget test-time training: this work bakes optimal control directly into LLMs, yielding up to 27.8% gains in mathematical reasoning.
Musculoskeletal robots can now play table tennis, thanks to a hierarchical RL approach that sidesteps the curse of dimensionality in high-dimensional muscle control.
Agentic RL can now outperform both proprietary LLMs and torch.compile in the challenging domain of CUDA kernel generation, achieving up to 40% speedups on hard tasks.
Training Gemini-scale models just got a whole lot faster: veScale-FSDP boosts throughput by up to 66% and cuts memory use by 30% compared to existing FSDP implementations.
Forget task-specific architectures: a single Vision-Language-Action foundation model, ABot-N0, now dominates embodied navigation across five distinct tasks.