This paper introduces a "reasoning with skills" approach where LLMs summarize and store reusable reasoning skills learned from deliberation and exploration, retrieving them at inference time to guide reasoning. This contrasts with "reasoning from scratch" and allows the model to avoid redundant steps. Experiments on coding and math reasoning show the approach reduces reasoning tokens and improves performance.
LLMs can be both faster and smarter: pre-learned reasoning skills cut down token usage while boosting accuracy on coding and math problems.
Reasoning LLMs often spend substantial tokens on long intermediate reasoning traces (e.g., chain-of-thought) when solving new problems. We propose to summarize and store reusable reasoning skills distilled from extensive deliberation and trial-and-error exploration, and to retrieve these skills at inference time to guide future reasoning. Unlike the prevailing "reasoning from scratch" paradigm, our approach first recalls relevant skills for each query, helping the model avoid redundant detours and focus on effective solution paths. We evaluate our method on coding and mathematical reasoning tasks, and find that it significantly reduces reasoning tokens while improving overall performance. The resulting lower per-request cost indicates strong practical and economic potential for real-world deployment.