4 papers published across 2 labs.
Two heads are better than one: combining verbalized confidence and self-consistency with just two samples dramatically boosts uncertainty estimation in reasoning models, beating either signal alone even with much larger sampling budgets.
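A minimal sketch of how such a two-sample hybrid might work, assuming the model can be queried for an answer plus a verbalized confidence; the combination rule (boost on agreement, discount on disagreement) and the `query_model` callback are illustrative assumptions, not the paper's published formula:

```python
from typing import Callable, Tuple

# prompt -> (answer, verbalized confidence in [0, 1])
ModelCall = Callable[[str], Tuple[str, float]]

def two_sample_confidence(prompt: str, query_model: ModelCall) -> Tuple[str, float]:
    """Hybrid uncertainty estimate from just two samples."""
    ans1, conf1 = query_model(prompt)  # sample 1: answer + stated confidence
    ans2, conf2 = query_model(prompt)  # sample 2, drawn independently
    if ans1 == ans2:
        # Self-consistency agreement: interpolate the mean verbalized
        # confidence toward certainty (the 0.5 blend weight is an assumption).
        mean_conf = (conf1 + conf2) / 2
        return ans1, mean_conf + (1 - mean_conf) * 0.5
    # Disagreement: keep the higher-confidence answer but discount it.
    best_ans, best_conf = max([(ans1, conf1), (ans2, conf2)], key=lambda t: t[1])
    return best_ans, best_conf * 0.5
```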
Forget rephrasing: stitching synthetic text into "megadocs" unlocks surprisingly better pre-training, especially for long-context tasks, and keeps improving as you scale.
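A sketch of the stitching idea, assuming "megadocs" means concatenating synthetic documents until a target length is reached rather than rephrasing them; the function name, separator, and whitespace token proxy are assumptions:

```python
from typing import Iterable, List

def make_megadocs(docs: Iterable[str], target_tokens: int = 65536,
                  sep: str = "\n\n") -> List[str]:
    """Stitch individual synthetic documents into long 'megadocs'.

    Consecutive documents are concatenated until a target length is
    reached, yielding long-context pre-training examples. Counting
    tokens by whitespace split is a simplifying assumption.
    """
    megadocs, buffer, length = [], [], 0
    for doc in docs:
        buffer.append(doc)
        length += len(doc.split())  # crude token proxy
        if length >= target_tokens:
            megadocs.append(sep.join(buffer))
            buffer, length = [], 0
    if buffer:  # flush the final partial megadoc
        megadocs.append(sep.join(buffer))
    return megadocs
```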
Forget buying new GPUs: clever context-length routing can boost your LLM inference energy efficiency by 2.5x, dwarfing the 1.7x gain from upgrading to a B200.
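One plausible shape for such a router, assuming requests are dispatched to serving pools sized for their context length so each pool runs near its efficiency sweet spot; the thresholds, pool names, and token proxy are all assumptions:

```python
from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    max_new_tokens: int

def route_by_context_length(req: Request,
                            short_threshold: int = 4096,
                            long_threshold: int = 32768) -> str:
    """Pick a serving pool based on total context length.

    Short requests waste energy on hardware provisioned for long
    contexts, so routing by length keeps each pool efficient.
    """
    ctx = len(req.prompt.split()) + req.max_new_tokens  # crude token proxy
    if ctx <= short_threshold:
        return "short-context-pool"   # dense batching, small KV cache
    if ctx <= long_threshold:
        return "mid-context-pool"
    return "long-context-pool"        # large KV-cache headroom
```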
Optimizing multilingual training? Shapley values reveal the hidden cross-lingual transfer effects that current scaling laws miss, leading to better language mixture ratios.
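A language's Shapley value is its average marginal contribution to downstream performance across random orderings of the language set, which is what surfaces cross-lingual transfer. A standard Monte Carlo estimator is sketched below; the `evaluate` callback (scoring a model trained on a language subset) is an assumed interface, and one natural use of the output is to set mixture ratios proportional to the positive values:

```python
import random
from typing import Callable, Dict, FrozenSet, Sequence

def shapley_values(langs: Sequence[str],
                   evaluate: Callable[[FrozenSet[str]], float],
                   n_permutations: int = 200,
                   seed: int = 0) -> Dict[str, float]:
    """Monte Carlo Shapley estimate of each language's contribution.

    For each random ordering of the languages, add them one at a time
    and credit each language with its marginal gain in the score from
    `evaluate`. Averaging over orderings captures transfer effects
    that per-language scaling laws miss.
    """
    rng = random.Random(seed)
    values = {lang: 0.0 for lang in langs}
    for _ in range(n_permutations):
        order = list(langs)
        rng.shuffle(order)
        subset: FrozenSet[str] = frozenset()
        prev_score = evaluate(subset)  # score of the empty mixture
        for lang in order:
            subset = subset | {lang}
            score = evaluate(subset)
            values[lang] += score - prev_score  # marginal gain
            prev_score = score
    return {lang: v / n_permutations for lang, v in values.items()}
```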