This paper addresses the challenge of generating effective test suites with open-source LLMs, which often lack a suite-level perspective and struggle to maximize the marginal gain in coverage. The authors formalize test suite generation as a Markov Decision Process (MDP) and leverage its monotone submodularity to develop TestDecision, a framework that transforms LLMs into neural greedy experts. TestDecision combines a greedy inference framework for test suite construction with a reinforcement learning training pipeline that strengthens the LLM's sequential test generation ability, achieving significant improvements in branch coverage, execution pass rate, and bug detection over existing methods.
Open-source LLMs can generate test suites rivaling GPT-4.2's quality, thanks to a new framework that treats test generation as a step-wise greedy optimization problem and trains the model via reinforcement learning.
With the rapid evolution of LLMs, automated software testing is undergoing a paradigm shift. While proprietary models such as GPT-4o demonstrate impressive capabilities, their high deployment costs and data-privacy concerns make open-source LLMs the practical choice for many academic and industrial scenarios. In automated test generation, the field has evolved toward iterative, LLM-driven workflows for constructing test suites. When using open-source LLMs, we empirically observe that they lack a suite-level perspective and suffer from structural myopia: they fail to generate new tests with large marginal gain given the currently covered state. In this paper, taking a sequential perspective, we formalize test suite generation as an MDP and show that its objective exhibits monotone submodularity, which enables an effective relaxation of this NP-hard global optimization into a tractable step-wise greedy procedure. Guided by this insight, we propose TestDecision, a framework that transforms LLMs into neural greedy experts. TestDecision consists of two synergistic components: (1) an inference framework that constructs test suites following a step-wise greedy strategy; and (2) a reinforcement learning training pipeline that equips the base LLM with the sequential test generation ability needed to maximize marginal gain. Comprehensive evaluations on the ULT benchmark demonstrate that TestDecision significantly outperforms existing advanced methods: it improves branch coverage by 38.15-52.37% and execution pass rate by 298.22-558.88% across all base models, and with a 7B backbone achieves performance comparable to the much larger proprietary GPT-5.2. Furthermore, TestDecision finds 58.43-95.45% more bugs than the vanilla base LLMs and exhibits superior generalization on LiveCodeBench, proving its capability to construct high-quality test suites.
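The step-wise greedy procedure the abstract relies on can be illustrated with a minimal sketch. Because coverage is monotone submodular, repeatedly adding the candidate test with the largest marginal coverage gain yields a tractable approximation to the NP-hard suite-level optimum. The candidate tests, their covered branch IDs, and the function name below are illustrative assumptions, not details from the paper, which uses an LLM rather than a fixed candidate pool.

```python
def greedy_suite(candidates, budget):
    """Pick up to `budget` tests, each maximizing marginal branch coverage.

    candidates: dict mapping test name -> set of covered branch ids.
    Returns the selected suite (in pick order) and the covered branch set.
    """
    covered, suite = set(), []
    for _ in range(budget):
        # Marginal gain of each remaining candidate given current coverage.
        best = max(
            (t for t in candidates if t not in suite),
            key=lambda t: len(candidates[t] - covered),
            default=None,
        )
        if best is None or not (candidates[best] - covered):
            break  # no remaining candidate adds new coverage
        suite.append(best)
        covered |= candidates[best]
    return suite, covered

# Hypothetical candidates: t4 is fully redundant with t1 and is never picked.
tests = {
    "t1": {1, 2, 3},
    "t2": {3, 4},
    "t3": {5},
    "t4": {1, 2},
}
suite, covered = greedy_suite(tests, budget=4)
print(suite)    # ['t1', 't2', 't3']
print(covered)  # {1, 2, 3, 4, 5}
```

In TestDecision's setting, the role of the candidate pool is played by the LLM itself: the inference framework conditions each generation step on the current coverage state so the model proposes the next high-marginal-gain test directly, and the early-exit branch corresponds to stopping once no further gain is achievable.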