Forget fine-tuning: Prompt-Level Distillation lets small models match frontier reasoning performance by distilling explicit reasoning patterns into structured system prompts.
LLMs still struggle with infrequently occurring knowledge, and this paper offers a structured framework for understanding why that happens, how it can be mitigated, and what the implications are for responsible AI.