Seemingly impressive VLA performance on robotic benchmarks crumbles under causal-intervention stress tests, exposing a reliance on brittle shortcuts rather than genuine embodied reasoning.
A million-sequence, high-quality, open-source motion dataset finally lets text-to-motion models generalize beyond toy benchmarks.
Stop averaging over noisy robot data: PTR selectively trusts training samples based on how well their post-action consequences align with learned representations, yielding more robust offline policy learning.
Forget synthetic data and limited teleoperation: Being-H0 leverages the dexterity and scale of human hand videos for VLA pretraining, unlocking superior performance on complex manipulation tasks.