Corpus2Skill distills a document corpus into a hierarchical skill directory that LLM agents can navigate, retrieving information more effectively than standard RAG. An offline pipeline clusters documents, generates LLM-written summaries at each level, and materializes the result as a tree of navigable skill files. Because the hierarchy is explicit, the agent can reason about where to look, backtrack from unproductive paths, and combine evidence across branches; on the WixQA benchmark, Corpus2Skill outperforms dense retrieval, RAPTOR, and other agentic RAG baselines on all quality metrics.
Forget brute-force retrieval: hierarchical navigation lets LLMs outperform RAG on enterprise QA by explicitly reasoning about the structure of knowledge.
Retrieval-Augmented Generation (RAG) grounds LLM responses in external evidence but treats the model as a passive consumer of search results: it never sees how the corpus is organized or what it has not yet retrieved, limiting its ability to backtrack or combine scattered evidence. We present Corpus2Skill, which distills a document corpus into a hierarchical skill directory offline and lets an LLM agent navigate it at serve time. The compilation pipeline iteratively clusters documents, generates LLM-written summaries at each level, and materializes the result as a tree of navigable skill files. At serve time, the agent receives a bird's-eye view of the corpus, drills into topic branches via progressively finer summaries, and retrieves full documents by ID. Because the hierarchy is explicitly visible, the agent can reason about where to look, backtrack from unproductive paths, and combine evidence across branches. On WixQA, an enterprise customer-support benchmark for RAG, Corpus2Skill outperforms dense retrieval, RAPTOR, and agentic RAG baselines across all quality metrics.
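The two phases described in the abstract, offline compilation (recursive clustering plus per-level summaries) and serve-time navigation (drill down through summaries, then fetch documents by ID), can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the names `Skill`, `compile_skills`, `navigate`, and the pluggable `cluster_fn`/`summarize_fn`/`choose_fn` hooks are all hypothetical stand-ins for the actual pipeline components.

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    """One node in the skill directory: a summary plus either
    sub-skills (internal node) or document IDs (leaf)."""
    summary: str
    children: list = field(default_factory=list)
    doc_ids: list = field(default_factory=list)

def compile_skills(docs, cluster_fn, summarize_fn, max_leaf=4):
    """Offline phase: recursively cluster documents and attach an
    LLM-written summary at every level of the resulting tree."""
    if len(docs) <= max_leaf:
        # Small enough to be a leaf: store document IDs directly.
        return Skill(summary=summarize_fn(docs),
                     doc_ids=[d["id"] for d in docs])
    node = Skill(summary="")
    for cluster in cluster_fn(docs):
        node.children.append(
            compile_skills(cluster, cluster_fn, summarize_fn, max_leaf))
    # Summarize the node from its children's summaries, so each level
    # gives the agent a progressively coarser view of the corpus.
    node.summary = summarize_fn([{"text": c.summary} for c in node.children])
    return node

def navigate(root, choose_fn):
    """Serve-time phase: the agent reads child summaries, picks a
    branch, and repeats until it reaches a leaf's document IDs."""
    node = root
    while node.children:
        node = choose_fn(node.children)  # agent decides where to look
    return node.doc_ids
```

In the real system, `choose_fn` would be an LLM call that reads the visible summaries and may also backtrack to a sibling branch; the sketch keeps navigation as a single top-down pass for brevity.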