The paper introduces VIBEPASS, a benchmark that evaluates LLMs' ability to self-diagnose and repair subtle faults in their own code, a capability central to autonomous software engineering. VIBEPASS decomposes the task into Fault-Triggering Test Generation (FT-Test) and Fault-targeted Program Repair (FPR), pairing competitive programming problems with LLM-generated solutions that pass partial test suites but fail on semantic edge cases. Results show that fault-targeted reasoning is the bottleneck: models struggle with discriminative test generation despite producing syntactically valid tests at high rates, and poorly targeted self-generated tests can actively degrade repair performance.
LLMs can generate syntactically correct tests, but their ability to *reason* about code faults is surprisingly poor, hindering autonomous debugging.
As Large Language Models shift programming toward human-guided ``vibe coding'', agentic coding tools increasingly rely on models to self-diagnose and repair their own subtle faults -- a capability central to autonomous software engineering yet never systematically evaluated. We present \name{}, the first empirical decomposition that jointly evaluates two coupled tasks: \emph{Fault-Triggering Test Generation (FT-Test)}, constructing a discriminative witness that exposes a latent bug, and \emph{Fault-targeted Program Repair (FPR)}, repairing it under varying diagnostic conditions. \name{} pairs competitive programming problems with LLM-generated solutions that pass partial test suites but fail on semantic edge cases, enabling controlled identification of where the diagnostic chain breaks down. Evaluating 12 frontier LLMs, we find that fault-targeted reasoning does not scale with general coding ability. Models produce syntactically valid test inputs at near-ceiling rates yet collapse on discriminative generation, with fault hypothesis generation -- not output validation -- as the dominant bottleneck. Test-guided repair reveals a complementary insight: when self-generated tests successfully witness a fault, the resulting repair matches or outperforms repair guided by externally provided tests, but tests that fail to witness the fault actively degrade repair below unguided baselines. Together, these results reframe the challenge of autonomous debugging: the binding bottleneck is not code synthesis or test validity but fault-targeted reasoning, a capability that remains deficient across all frontier models.
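To make the FT-Test criterion concrete, the sketch below illustrates what a "discriminative witness" means: a generated test input counts as fault-triggering only if the buggy solution and a reference solution disagree on it. All function names, the example problem (maximum subarray sum), and the specific bug are illustrative assumptions, not taken from the paper's actual harness.

```python
# Hypothetical sketch of the FT-Test success criterion.
# The buggy solution below has a semantic edge case typical of the
# benchmark's setup: it passes "easy" inputs but fails a corner case.

def reference(nums):
    """Ground-truth solution: maximum subarray sum (Kadane's algorithm)."""
    best = cur = nums[0]
    for x in nums[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

def buggy(nums):
    """LLM-style solution with a latent bug: it implicitly assumes at
    least one non-negative element, so all-negative inputs return 0."""
    best = cur = 0
    for x in nums:
        cur = max(0, cur + x)
        best = max(best, cur)
    return best

def is_witness(test_input):
    """A test input witnesses the fault iff the two outputs diverge."""
    return buggy(test_input) != reference(test_input)

print(is_witness([1, -2, 3, 4]))  # False: both return 7, not discriminative
print(is_witness([-5, -1, -3]))   # True: buggy gives 0, reference gives -1
```

A syntactically valid input like `[1, -2, 3, 4]` is useless for diagnosis because both programs agree on it; only the all-negative input exposes the fault. The paper's finding is that models generate inputs of the first kind at near-ceiling rates but rarely the second.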