GigaAI
Even with partial observability from manipulator truncation, StableIDM recovers stable action predictions, boosting downstream VLA real-robot success by 17.6%.
Get simulation-ready assets for robotics and graphics in under a second, without any manual annotation, using a new feedforward approach that jointly learns physical attributes and 3D Gaussian Splatting reconstruction from a single video.
Robots can now assemble boxes more reliably in the real world, thanks to a video-generative value model that anticipates future states instead of assessing task progress from static snapshots.
Explicitly modeling depth in world-action models significantly boosts planning robustness and future prediction quality for autonomous driving.
Robots can now plan 9x faster and achieve significantly higher success rates by decoupling action prediction from video generation in World-Action Models.
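The speedup comes from not having to roll out the expensive video generator when only an action is needed. A minimal toy sketch of that decoupling, using plain numpy linear layers with made-up dimensions and function names (the paper's actual architecture is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only.
OBS_DIM, LATENT_DIM, ACTION_DIM, FRAME_DIM = 64, 32, 7, 4096

# One shared encoder feeding two decoupled heads: a lightweight
# action head and a heavy video-generation head (stand-in: a large matrix).
W_enc = rng.normal(size=(OBS_DIM, LATENT_DIM)) * 0.1
W_act = rng.normal(size=(LATENT_DIM, ACTION_DIM)) * 0.1
W_vid = rng.normal(size=(LATENT_DIM, FRAME_DIM)) * 0.1

def encode(obs):
    # Shared world-model latent.
    return np.tanh(obs @ W_enc)

def predict_action(obs):
    # Fast path: planning queries only the small action head,
    # skipping video generation entirely.
    return encode(obs) @ W_act

def predict_frame(obs):
    # Slow path: full future-frame prediction, needed for world-model
    # training or visualization, not for every control step.
    return encode(obs) @ W_vid

obs = rng.normal(size=OBS_DIM)
action = predict_action(obs)  # shape (ACTION_DIM,)
frame = predict_frame(obs)    # shape (FRAME_DIM,)
```

Because the action head touches far fewer parameters than the frame decoder, repeated planning rollouts only pay the cheap path, which is the intuition behind the reported speedup.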
Flow-based VLAs can now learn online without likelihoods or value networks, unlocking better generalization in complex embodied control tasks.
Forget end-to-end VLAs: GigaBrain-0.5M leverages world models and reinforcement learning to achieve a 30% performance boost on complex robotic manipulation tasks, with reliable long-horizon execution.