SkillsBench is introduced as a benchmark to evaluate the effectiveness of agent skills (structured procedural knowledge) in enhancing LLM agent performance across 86 diverse tasks. The benchmark assesses performance under three conditions: no skills, curated skills, and self-generated skills, using seven agent-model configurations. Results show that curated skills improve average pass rates by 16.2 percentage points, but self-generated skills offer no benefit, highlighting the challenge of reliable skill authoring by models.
LLMs can't reliably generate the very skills that boost their performance, and smaller models equipped with expert-crafted skills can rival larger, skill-less models.
Agent Skills are structured packages of procedural knowledge that augment LLM agents at inference time. Despite rapid adoption, there is no standard way to measure whether they actually help. We present SkillsBench, a benchmark of 86 tasks across 11 domains paired with curated Skills and deterministic verifiers. Each task is evaluated under three conditions: no Skills, curated Skills, and self-generated Skills. We test 7 agent-model configurations over 7,308 trajectories. Curated Skills raise average pass rate by 16.2 percentage points (pp), but effects vary widely by domain (from +4.5pp for Software Engineering to +51.9pp for Healthcare), and 16 of 84 tasks show negative deltas. Self-generated Skills provide no benefit on average, showing that models cannot reliably author the procedural knowledge they benefit from consuming. Focused Skills with 2–3 modules outperform comprehensive documentation, and smaller models with Skills can match larger models without them.
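The three-condition comparison above reduces to a simple aggregation: compute each task's pass rate per condition, then the percentage-point delta against the no-Skills baseline. A minimal sketch of that arithmetic, with entirely illustrative trial data (the condition names and numbers are placeholders, not SkillsBench results):

```python
def pass_rate(results):
    """Fraction of trials that passed (1 = pass, 0 = fail)."""
    return sum(results) / len(results)

# Toy trial outcomes for one task under each evaluation condition.
trials = {
    "no_skills":      [0, 0, 1, 0, 1, 0],
    "curated_skills": [1, 1, 1, 0, 1, 1],
    "self_generated": [0, 1, 0, 0, 1, 0],
}

baseline = pass_rate(trials["no_skills"])
for condition in ("curated_skills", "self_generated"):
    delta_pp = 100 * (pass_rate(trials[condition]) - baseline)
    print(f"{condition}: {delta_pp:+.1f}pp vs no_skills")
```

Averaging these per-task deltas across a full task suite yields aggregate figures like the 16.2pp reported above; per-domain averages expose the wide variance the abstract notes.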