Achieve robust robot manipulation across diverse viewpoints without camera calibration by synthesizing novel views with a geometry-aware video diffusion model.
Pocket-sized VLA models can now achieve state-of-the-art robot manipulation performance by pre-training on a curated multimodal dataset and injecting manipulation-relevant representations into the action space.
End-to-end driving models are surprisingly poor at exploiting navigation inputs, but a new framework shows how to inject them for state-of-the-art results.
A surprisingly simple VLA model, StarVLA-$\alpha$, beats more complex systems on real-world robotics tasks, suggesting that the choice of VLM backbone matters more than intricate architectures.
Robots can now manipulate objects with greater dexterity and adaptability thanks to a new world model that leverages both vision and high-frequency tactile feedback to predict and react to contact dynamics.
Watch two robots hug and dance: Rhythm enables robust, physically plausible interactions between dual humanoids, bridging the gap from simulation to real-world deployment.