The paper introduces BenchGuard, an automated framework that uses LLMs to audit task-oriented, execution-based agent benchmarks by cross-verifying benchmark artifacts and agent solutions. BenchGuard identified 12 author-confirmed issues in ScienceAgentBench and matched 83.3% of expert-identified issues in BIXBench, revealing errors that human review had missed. This demonstrates the potential for AI-assisted benchmark development, where LLMs validate evaluation infrastructure.
LLM benchmarks are riddled with hidden flaws that even human experts miss, yet an automated LLM auditor can catch them for under $15 per benchmark.
As benchmarks grow in complexity, many apparent agent failures are not failures of the agent at all; they are failures of the benchmark itself: broken specifications, implicit assumptions, and rigid evaluation scripts that penalize valid alternative approaches. We propose employing frontier LLMs as systematic auditors of evaluation infrastructure, and realize this vision through BenchGuard, the first automated auditing framework for task-oriented, execution-based agent benchmarks. BenchGuard cross-verifies all benchmark artifacts via structured LLM protocols, optionally incorporating agent solutions or execution traces as additional diagnostic evidence. Deployed on two prominent scientific benchmarks, BenchGuard identified 12 author-confirmed issues in ScienceAgentBench, including fatal errors that render tasks unsolvable, and exactly matched 83.3% of expert-identified issues on the BIXBench Verified-50 subset, catching defects that prior human review had missed entirely. A full audit of 50 complex bioinformatics tasks costs under USD 15, making automated benchmark auditing a practical and valuable complement to human review. These findings point toward AI-assisted benchmark development, in which frontier models serve not only as subjects of evaluation but also as active participants in validating the evaluation infrastructure itself.
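The abstract only names the cross-verification idea, so the following is a minimal illustrative sketch rather than BenchGuard's actual protocol: for each task, collect the benchmark artifacts (task description, evaluation script, gold output, and optionally an agent solution or execution trace), ask a frontier LLM to check them against one another, and keep whatever inconsistencies it flags. The `query_llm` helper and the artifact field names are assumptions made for illustration.

```python
import json
from dataclasses import dataclass

# Hypothetical audit prompt; the real framework uses structured LLM protocols
# whose exact wording is not given in this summary.
AUDIT_PROMPT = """You are auditing a benchmark task for internal consistency.
Task description:
{description}

Evaluation script:
{eval_script}

Gold output:
{gold_output}

Agent solution / execution trace (may be empty):
{agent_evidence}

Cross-check these artifacts. List any contradictions, missing information,
unsolvable requirements, or evaluation rules that would reject a valid
alternative solution. Answer as a JSON list of issue descriptions; return []
if you find none."""


@dataclass
class TaskArtifacts:
    task_id: str
    description: str
    eval_script: str
    gold_output: str
    agent_evidence: str = ""  # optional agent solution or execution trace


def query_llm(prompt: str) -> str:
    """Placeholder for a call to any frontier-model chat API."""
    raise NotImplementedError


def audit_task(task: TaskArtifacts) -> list[str]:
    """Cross-verify one task's artifacts and return the issues the model flags."""
    prompt = AUDIT_PROMPT.format(
        description=task.description,
        eval_script=task.eval_script,
        gold_output=task.gold_output,
        agent_evidence=task.agent_evidence or "(none provided)",
    )
    try:
        return json.loads(query_llm(prompt))
    except json.JSONDecodeError:
        return ["auditor returned malformed output; manual review needed"]


def audit_benchmark(tasks: list[TaskArtifacts]) -> dict[str, list[str]]:
    """Audit every task and keep only those with flagged issues."""
    report = {t.task_id: audit_task(t) for t in tasks}
    return {tid: issues for tid, issues in report.items() if issues}
```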