By decoupling visual and motor information during pretraining, FutureVLA unlocks more effective visuomotor prediction for vision-language-action models, boosting performance without modifying downstream architectures.
Unlock human-like dexterity in robotic manipulation by combining RL-assisted teleoperation with a novel VLA architecture that leverages force and tactile feedback.
Bimanual robots can now achieve robust dexterous grasping in the real world, thanks to a massive 20M-frame synthetic dataset and a simple attention-based policy that transfers surprisingly well.
Stop guessing about action spaces for robot manipulation: a massive empirical study reveals that predicting delta actions boosts performance, while joint vs. task space offers a stability vs. generalization tradeoff.
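The last blurb contrasts predicting delta actions (per-step changes) with absolute targets. A minimal sketch of what that choice means in practice, with hypothetical helper names (`to_delta_actions`, `apply_delta_actions`) not taken from the paper:

```python
import numpy as np

def to_delta_actions(poses: np.ndarray) -> np.ndarray:
    """Convert absolute poses of shape (T, D) into delta actions (T-1, D)."""
    return np.diff(poses, axis=0)

def apply_delta_actions(start_pose: np.ndarray, deltas: np.ndarray) -> np.ndarray:
    """Integrate delta actions forward from a start pose to recover absolute poses."""
    return start_pose + np.cumsum(deltas, axis=0)

# Example: a short 1-D end-effector trajectory.
poses = np.array([[0.0], [0.1], [0.25], [0.25]])
deltas = to_delta_actions(poses)                   # per-step changes
recovered = apply_delta_actions(poses[0], deltas)  # round-trips to poses[1:]
assert np.allclose(recovered, poses[1:])
```

A delta parameterization centers targets near zero regardless of workspace position, which is one intuition for why it can train more stably than absolute targets.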