Robots can now assemble boxes in the real world more capably thanks to a video-generative value model that anticipates future states, moving beyond static snapshots toward more reliable assessment of task progress.
Achieve 100% success rates on visually ambiguous manipulation tasks by fusing high-frequency tactile data with low-frequency visual planning, outperforming visual-only baselines while satisfying hard real-time constraints.
Forget end-to-end VLAs: GigaBrain-0.5M* leverages world models and reinforcement learning for a 30% performance boost on complex robotic manipulation tasks, demonstrating reliable long-horizon execution.