Robots can now assemble boxes in the real world more reliably, thanks to a video-generative value model that anticipates future states instead of judging task progress from static snapshots.
Agentic models can learn to trust their "gut" and rely less on external tools, leading to faster and more accurate reasoning.
LLMs reason better when their uncertainty consistently decreases, paving the way for shorter, more accurate chain-of-thought reasoning.
Forget mixed-precision: tunable INT8 emulation can simultaneously boost accuracy and performance in FP64 HPC workloads on GPUs.
Robots can now plan 9x faster and achieve significantly higher success rates by decoupling action prediction from video generation in World-Action Models.
Ditch the handcrafted coefficients: DyWeight learns how to dynamically weight gradients in diffusion model sampling, slashing compute while boosting image quality.
Forget end-to-end VLAs: GigaBrain-0.5M* leverages world models and reinforcement learning to achieve a 30% performance boost on complex robotic manipulation tasks, demonstrating reliable long-horizon execution.