This paper introduces a test-time scaling framework for agentic coding that addresses the challenge of representing and reusing prior experience from long rollout trajectories. The framework converts rollouts into structured summaries that preserve salient information, enabling parallel scaling via Recursive Tournament Voting (RTV) and sequential scaling by adapting Parallel-Distill-Refine (PDR). Experiments on SWE-Bench Verified and Terminal-Bench v2.0 demonstrate consistent performance improvements for frontier coding agents, highlighting the importance of representation, selection, and reuse in test-time scaling.
Agentic coding gets a serious boost: representing rollouts as structured summaries and then recursively comparing them lets Claude-4.5-Opus jump from 70.9% to 77.6% on SWE-Bench Verified.
Test-time scaling has become a powerful way to improve large language models. However, existing methods are best suited to short, bounded outputs that can be directly compared, ranked, or refined. Long-horizon coding agents violate this premise: each attempt produces an extended trajectory of agent actions, observations, errors, and partial progress. In this setting, the main challenge is no longer generating more attempts, but representing prior experience in a form that can be effectively selected from and reused. We propose a test-time scaling framework for agentic coding based on compact representations of rollout trajectories. Our framework converts each rollout into a structured summary that preserves its salient hypotheses, progress, and failure modes while discarding low-signal trace details. This representation enables two complementary forms of inference-time scaling. For parallel scaling, we introduce Recursive Tournament Voting (RTV), which recursively narrows a population of rollout summaries through small-group comparisons. For sequential scaling, we adapt Parallel-Distill-Refine (PDR) to the agentic setting by conditioning new rollouts on summaries distilled from prior attempts. Our method consistently improves the performance of frontier coding agents on SWE-Bench Verified and Terminal-Bench v2.0. For example, with our method, Claude-4.5-Opus improves from 70.9% to 77.6% on SWE-Bench Verified (mini-SWE-agent) and from 46.9% to 59.1% on Terminal-Bench v2.0 (Terminus 1). Our results suggest that test-time scaling for long-horizon agents is fundamentally a problem of representation, selection, and reuse.
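To make the two scaling modes concrete, here is a minimal Python sketch of RTV as the abstract describes it: a population of rollout summaries is narrowed through small-group comparisons until one winner remains. The `RolloutSummary` fields, the `group_size` default, and the `judge` callable (standing in for an LLM that compares summaries side by side) are illustrative assumptions, not details taken from the paper.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class RolloutSummary:
    # Illustrative fields mirroring what the abstract says a summary
    # preserves: hypotheses, progress, and failure modes.
    rollout_id: int
    hypothesis: str     # what the rollout believed the fix was
    progress: str       # concrete progress (files edited, tests passing)
    failure_modes: str  # errors and dead ends encountered


def recursive_tournament_vote(
    summaries: List[RolloutSummary],
    judge: Callable[[List[RolloutSummary]], RolloutSummary],
    group_size: int = 4,  # assumed; the abstract only says "small-group"
) -> RolloutSummary:
    """Recursively narrow a population of summaries via small-group
    comparisons until a single winner remains."""
    if not summaries:
        raise ValueError("need at least one rollout summary")
    while len(summaries) > 1:
        winners = []
        for i in range(0, len(summaries), group_size):
            group = summaries[i:i + group_size]
            # A lone summary advances unopposed; otherwise the judge
            # (e.g., an LLM comparison prompt) picks the group winner.
            winners.append(group[0] if len(group) == 1 else judge(group))
        summaries = winners
    return summaries[0]
```

The sequential mode can be sketched in the same style. Below, `run_rollout` and `distill` are hypothetical placeholders for launching an agent conditioned on prior context and for compressing a round of summaries into that context; the abstract specifies the conditioning idea but not these interfaces, and the `rounds` and `width` defaults are likewise assumptions.

```python
def parallel_distill_refine(
    task: str,
    run_rollout: Callable[[str, Optional[str]], RolloutSummary],
    distill: Callable[[List[RolloutSummary]], str],
    judge: Callable[[List[RolloutSummary]], RolloutSummary],
    rounds: int = 2,  # assumed; not specified in the abstract
    width: int = 4,   # assumed; not specified in the abstract
) -> RolloutSummary:
    """Each round conditions fresh rollouts on a digest distilled from
    the previous round's summaries, then RTV selects the final answer."""
    digest: Optional[str] = None
    summaries: List[RolloutSummary] = []
    for _ in range(rounds):
        summaries = [run_rollout(task, digest) for _ in range(width)]
        digest = distill(summaries)
    return recursive_tournament_vote(summaries, judge)
```

Note that in both sketches selection and reuse operate only on summaries, never on raw traces, which is the representational point the abstract is making.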