Stop wasting compute on easy and impossible examples: PACED distillation focuses your student model's training on the sweet spot where it actually learns.
Quadruped robots can now continuously jump across rough, lunar-like terrain using only onboard sensors, thanks to a new dual-horizon control model and a clever hardware-in-the-loop validation platform.
Reasoning models aren't just verbose; they're actively *harmed* by their own verbosity. A simple self-distillation trick can compress their outputs by up to 59% while boosting accuracy by up to 16 points.
Overconfident errors in RLVR monopolize probability mass and suppress exploration; a confidence-aware penalty fixes this and boosts mathematical reasoning performance.