The paper addresses the problem of unreliable LLM-generated tests when evaluating LLM-generated code by introducing ACES, a method that ranks tests by their ability to distinguish correct from incorrect code using a leave-one-out AUC metric. ACES avoids explicitly determining test correctness by instead measuring how consistent each test's pass/fail pattern is with the ranking induced by the remaining tests. ACES has two variants, a closed-form solution (ACES-C) and an iterative optimization approach (ACES-O), both of which achieve state-of-the-art Pass@k scores on code generation benchmarks.
Stop blindly trusting LLM-generated tests: ACES reveals which tests actually distinguish good code from bad, leading to better code evaluation without needing ground truth labels.
Selecting LLM-generated code candidates using LLM-generated tests is challenging because the tests themselves may be incorrect. Existing methods either treat all tests equally or rely on ad-hoc heuristics to filter unreliable tests. Yet determining test correctness requires knowing which codes are correct, creating a \emph{circular dependency}. Our key insight is that we need not determine test correctness at all: \emph{test votes should rank, not merely count}. What matters is not how many codes pass a test, but whether the test can \emph{distinguish} correct from incorrect code. We break the circular dependency via leave-one-out evaluation: hold out one test, rank codes by their aggregate scores on all remaining tests, and measure whether the held-out test's pass/fail pattern agrees with this ranking. We formalize this agreement as the leave-one-out AUC~(LOO-AUC) and prove that the expected LOO-AUC is proportional to each test's ability to separate correct code from incorrect code. Building on this, we propose \textbf{ACES}~(\textbf{A}UC \textbf{C}onsist\textbf{E}ncy \textbf{S}coring) with two complementary variants: ACES-C provides closed-form weights that provably approximate the oracle in expectation under a mild assumption on average test quality; ACES-O drops this assumption and iteratively optimizes a differentiable LOO-AUC objective. Both operate solely on the binary pass matrix with negligible overhead, and achieve state-of-the-art Pass@$k$ on multiple code generation benchmarks.
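To make the leave-one-out procedure concrete, here is a minimal sketch of how LOO-AUC scoring over a binary pass matrix could look. It follows the description above (hold out one test, rank codes by their aggregate scores on the remaining tests, measure agreement via AUC), but the uniform held-out scoring, the `loo_auc`/`select_code` names, and the final weighting rule (clipping $2\cdot\text{AUC}-1$ at zero) are illustrative assumptions rather than the paper's exact ACES-C formulation.

```python
import numpy as np

def loo_auc(pass_matrix: np.ndarray) -> np.ndarray:
    """Leave-one-out AUC for each test.

    pass_matrix: (n_tests, n_codes) binary matrix, entry (i, j) = 1
    if code candidate j passes test i.
    Returns one LOO-AUC score in [0, 1] per test.
    """
    n_tests, _ = pass_matrix.shape
    total = pass_matrix.sum(axis=0)        # aggregate pass count per code
    aucs = np.full(n_tests, 0.5)           # uninformative default

    for i in range(n_tests):
        labels = pass_matrix[i]            # held-out test's pass/fail pattern
        scores = total - labels            # ranking induced by all *other* tests
        pos, neg = scores[labels == 1], scores[labels == 0]
        if len(pos) == 0 or len(neg) == 0:
            continue                       # degenerate test: all codes pass or all fail
        # AUC = P(passing code outranks failing code), ties counted as 0.5
        greater = (pos[:, None] > neg[None, :]).mean()
        ties = (pos[:, None] == neg[None, :]).mean()
        aucs[i] = greater + 0.5 * ties
    return aucs

def select_code(pass_matrix: np.ndarray) -> int:
    """Pick the code candidate with the highest AUC-weighted vote.

    The 2*AUC - 1 clipping is an assumed weighting, not the paper's exact rule.
    """
    weights = np.clip(2 * loo_auc(pass_matrix) - 1, 0, None)
    return int(np.argmax(weights @ pass_matrix))
```

A tiny usage example under the same assumptions: with four tests and three code candidates, a test whose pass/fail pattern contradicts the others receives a low LOO-AUC and is effectively down-weighted when selecting a candidate.

```python
# 4 tests x 3 code candidates; the last test disagrees with the rest
M = np.array([[1, 1, 0],
              [1, 0, 0],
              [1, 1, 0],
              [0, 0, 1]])
print(loo_auc(M))      # -> [0.75, 0.75, 0.75, 0.0]; the inconsistent test scores lowest
print(select_code(M))  # -> 0
```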