This paper investigates whether reinforcement learning (RL) expands the capability boundary of LLM agents in tool-use scenarios, in contrast to the established finding that RL primarily improves reliability on static reasoning tasks. The authors introduce PASS@(k,T), a metric that varies both the sampling budget (k) and the interaction depth (T) to differentiate capability expansion from efficiency gains. The key result is that RL genuinely enlarges the capability boundary for compositional tool-use tasks, unlike in static reasoning, and that this expansion is driven by self-directed exploration.
RL unlocks genuinely new tool-use capabilities in LLMs by enabling compositional strategies that surpass what's achievable through mere re-sampling, challenging the notion that RL only improves reliability.
Does reinforcement learning genuinely expand what LLM agents can do, or merely make them more reliable? For static reasoning, recent work answers the latter: base and RL pass@k curves converge at large k. We ask whether this holds for agentic tool use, where T rounds of interaction enable compositional strategies that re-sampling cannot recover. We introduce PASS@(k,T), a two-dimensional metric that jointly varies sampling budget k and interaction depth T, separating capability expansion from efficiency improvement. Our main finding is that, contrary to the static-reasoning result, tool-use RL genuinely enlarges the capability boundary: the RL agent's pass@(k,T) curve pulls above the base model's, and the gap widens at large k rather than converging. The expansion is specific to compositional, sequential information gathering; on simpler tasks RL behaves as prior work predicts. Under matched training data, supervised fine-tuning regresses the boundary on the same compositional tasks, isolating self-directed exploration as the causal factor. Mechanism analysis shows that RL reweights the base strategy distribution toward the subset whose downstream reasoning more often yields a correct answer, with the improvement concentrated on how the agent integrates retrieved information. These results reconcile optimistic and pessimistic readings of RL for LLMs: both are correct, on different task types.
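The abstract does not spell out how PASS@(k,T) is estimated. A natural reading, sketched below under assumptions not stated in the source, is that it applies the standard unbiased pass@k estimator to trajectories truncated at T interaction rounds: a sampled trajectory counts as a success under budget (k,T) only if it solves the task within T rounds. The function names and the `success_rounds` input format here are hypothetical illustrations, not the paper's implementation.

```python
from math import comb


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    trajectories, drawn without replacement from n samples of which c
    succeeded, is a success. Equals 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # too few failures to fill a k-sized draw
    return 1.0 - comb(n - c, k) / comb(n, k)


def pass_at_k_T(success_rounds: list, k: int, T: int) -> float:
    """Hypothetical PASS@(k,T) estimate for one task.

    success_rounds[i] is the interaction round at which trajectory i
    first succeeded, or None if it never did. A trajectory counts as a
    success under interaction budget T iff it succeeded within T rounds.
    """
    n = len(success_rounds)
    c = sum(1 for r in success_rounds if r is not None and r <= T)
    return pass_at_k(n, c, k)
```

Sweeping k at fixed T recovers an ordinary pass@k curve; sweeping T at fixed k shows how much of the gap is attributable to deeper interaction rather than more sampling, which is the separation the metric is designed to make.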