Forget scaling laws: this humanoid robot model matches benchmark performance with 10x less data by pre-training on human videos and then fine-tuning on robot-specific movements.
Robots get better at in-context learning by "thinking" visually about future trajectories, yielding stronger generalization and higher success rates on manipulation tasks.
Forget painstakingly labeled real-world data – GraspVLA proves you can train a surprisingly capable grasping foundation model on a billion frames of purely synthetic action data.