Get simulation-ready assets for robotics and graphics in under a second, without any manual annotation, using a new feedforward approach that jointly learns physical attributes and 3D Gaussian Splatting reconstruction from a single video.
Robots can now better assemble boxes in the real world thanks to a video-generative value model that anticipates future states, moving beyond static snapshots for more reliable task progress assessment.
LLMs are more fragile than we thought: a new algorithm efficiently maps the boundaries of their trustworthiness, revealing specific topics where they're prone to bias.
Explicitly modeling depth in world-action models significantly improves planning robustness and future-prediction quality for autonomous driving.
Robots can now plan 9x faster and achieve significantly higher success rates by decoupling action prediction from video generation in world-action models.
Spatial reasoning could be the secret sauce for building generalist embodied agents that can drive, manipulate objects, and fly drones, all within a single model.
Flow-based VLAs can now learn online without likelihoods or value networks, unlocking better generalization in complex embodied control tasks.