LLM agent skills, despite their promise, often fail in realistic settings: performance plummets to no-skill baselines when agents must retrieve skills from a large, uncurated collection.
LLMs can slash compute costs by up to 67% on reasoning tasks with a new online calibration method that adapts to each input at test time.
Ditch the equivariant constraints: canonicalization lets you train simpler, faster diffusion models that actually *outperform* equivariant architectures for symmetric generative tasks like 3D molecule design.