The paper introduces Agentic Verifier, an execution-based agent designed to improve the accuracy of LLMs on competitive programming tasks by actively generating discriminative test inputs that expose behavioral discrepancies among candidate solutions. It achieves this through multi-turn interaction with code execution environments, iteratively refining its input generator to produce targeted counterexamples rather than randomly sampled inputs. The agent is trained via a pipeline combining large-scale data synthesis, rejection fine-tuning, and agentic reinforcement learning, yielding significant accuracy improvements (up to 10-15 absolute points in Best@K) across five competitive programming benchmarks compared to existing execution-based re-ranking methods.
LLMs can solve competitive coding problems much more reliably by actively searching for the *right* test cases, rather than relying on random or pre-defined inputs.
Large language models (LLMs) have demonstrated strong coding capabilities but still struggle to solve competitive programming problems correctly in a single attempt. Execution-based re-ranking offers a promising test-time scaling strategy, yet existing methods are constrained either by the difficulty of generating correct test cases or by inefficient random input sampling. To address this limitation, we propose Agentic Verifier, an execution-based agent that actively reasons about program behaviors and searches for highly discriminative test inputs that expose behavioral discrepancies among candidate solutions. Through multi-turn interaction with code execution environments, the verifier iteratively refines the candidate input generator and produces targeted counterexamples rather than blindly sampling inputs. We train the verifier to acquire this discriminative input generation capability via a scalable pipeline combining large-scale data synthesis, rejection fine-tuning, and agentic reinforcement learning. Extensive experiments across five competitive programming benchmarks demonstrate consistent improvements over strong execution-based baselines, achieving up to +10-15% absolute gains in Best@K accuracy. Further analysis reveals clear test-time scaling behavior and highlights the verifier's broader potential beyond re-ranking.
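The core idea of re-ranking with discriminative inputs can be illustrated with a minimal sketch. This is not the paper's implementation; all function names, the toy candidate pool, and the disagreement heuristic are hypothetical, and the real verifier searches over inputs via multi-turn reasoning with an execution environment rather than over a fixed pool.

```python
# Hypothetical sketch of execution-based re-ranking with a
# discriminative input; names and candidates are illustrative.
from collections import Counter

# Three candidate "solutions" for abs(x); candidate_c is buggy.
def candidate_a(x): return x if x >= 0 else -x
def candidate_b(x): return abs(x)
def candidate_c(x): return x  # wrong for negative inputs

candidates = [candidate_a, candidate_b, candidate_c]

def disagreement(inp):
    """Number of distinct outputs the candidates produce on inp."""
    return len({f(inp) for f in candidates})

# A discriminative verifier prefers inputs that split the candidates
# instead of sampling blindly: x=5 reveals nothing, x=-3 exposes the bug.
pool = [5, 0, -3]
best_input = max(pool, key=disagreement)

# Re-rank candidates by agreement with the majority output on that input.
outputs = [f(best_input) for f in candidates]
majority_output, _ = Counter(outputs).most_common(1)[0]
ranked = sorted(candidates, key=lambda f: f(best_input) != majority_output)
print(best_input, [f.__name__ for f in ranked])
```

On this toy pool the input -3 maximizes disagreement, and the buggy candidate is demoted to the bottom of the ranking.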