Self-play can be dramatically improved by exploiting the "question construction path" it generates as privileged information for self-distillation, leading to 2-3x faster learning.
LLM agents can improve task performance by 18% when their skills and tools are co-evolved, rather than learned separately.
Current VLMs, despite excelling at general reasoning, still fail to accurately identify food and estimate nutrition, even when given multiple views and chain-of-thought prompting.
Unlock interpretable and reliable predictions for ultra-large complex systems, like climate patterns, by inferring governing equations at scales previously inaccessible.
Achieve state-of-the-art semi-supervised crowd instance segmentation and counting by generating high-quality mask supervision from sparse annotations, bridging the gap between the two tasks.
Predict catastrophic shifts in climate, ecosystems, and economics far earlier than previously possible by combining reservoir computing with dynamical systems analysis.
Forget noisy, biased LLM evaluators: CDRRM distills preference insights into compact rubrics, letting a frozen judge model leapfrog fully fine-tuned baselines with just 3k training samples.