The paper introduces Bench-2-CoP, a framework that uses LLM-as-judge analysis to map 194,955 benchmark questions onto the EU AI Act's taxonomy of model capabilities and propensities, addressing the gap between current AI benchmarks and regulatory needs. The study reveals a significant misalignment: benchmarks focus heavily on hallucination and performance reliability while neglecting critical functional capabilities related to loss-of-control scenarios. The authors demonstrate that current benchmarks are insufficient for the comprehensive risk assessment required for EU AI Act compliance.
Current AI benchmarks overwhelmingly focus on hallucination and reliability, completely missing crucial loss-of-control risks like autonomous AI development and evading human oversight, rendering them inadequate for EU AI Act compliance.
The rapid advancement of General Purpose AI (GPAI) models necessitates robust evaluation frameworks, especially with emerging regulations like the EU AI Act and its associated Code of Practice (CoP). Current AI evaluation practices depend heavily on established benchmarks, but these tools were not designed to measure the systemic risks that are the focus of the new regulatory landscape. This research addresses the urgent need to quantify this "benchmark-regulation gap." We introduce Bench-2-CoP, a novel, systematic framework that uses validated LLM-as-judge analysis to map the coverage of 194,955 questions from widely-used benchmarks against the EU AI Act's taxonomy of model capabilities and propensities. Our findings reveal a profound misalignment: the evaluation ecosystem dedicates the vast majority of its focus to a narrow set of behavioral propensities. On average, benchmarks devote 61.6% of their regulatory-relevant questions to "Tendency to hallucinate" and 31.2% to "Lack of performance reliability", while critical functional capabilities are dangerously neglected. Crucially, capabilities central to loss-of-control scenarios, including evading human oversight, self-replication, and autonomous AI development, receive zero coverage in the entire benchmark corpus. This study provides the first comprehensive, quantitative analysis of this gap, demonstrating that current public benchmarks are insufficient, on their own, to provide the evidence of comprehensive risk assessment required for regulatory compliance, and offering critical insights for the development of next-generation evaluation tools.
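The abstract does not specify the judge prompt, model, or taxonomy encoding, so the following is only a minimal sketch of what the LLM-as-judge mapping step could look like. The label list, prompt template, and the `classify_question` and `coverage` helpers are all hypothetical names introduced for illustration; the `judge` callable stands in for whatever LLM API call the authors actually use.

```python
from collections import Counter
from typing import Callable, Iterable

# Illustrative subset of the EU AI Act taxonomy labels named in the abstract;
# the full label set and exact wording are assumptions, not the authors' list.
TAXONOMY = [
    "Tendency to hallucinate",
    "Lack of performance reliability",
    "Evading human oversight",
    "Self-replication",
    "Autonomous AI development",
    "None of the above",
]

# Hypothetical judge prompt; the paper's validated prompt is not public here.
PROMPT_TEMPLATE = (
    "You are an expert on the EU AI Act's taxonomy of model capabilities "
    "and propensities. Classify the benchmark question below into exactly "
    "one of these categories:\n{labels}\n\nQuestion: {question}\n\n"
    "Answer with the category name only."
)


def classify_question(question: str, judge: Callable[[str], str]) -> str:
    """Ask the judge model for a single taxonomy label; fall back if unparseable."""
    prompt = PROMPT_TEMPLATE.format(
        labels="\n".join(f"- {label}" for label in TAXONOMY),
        question=question,
    )
    answer = judge(prompt).strip()
    return answer if answer in TAXONOMY else "None of the above"


def coverage(questions: Iterable[str], judge: Callable[[str], str]) -> dict[str, float]:
    """Fraction of questions mapped to each taxonomy category."""
    counts = Counter(classify_question(q, judge) for q in questions)
    total = sum(counts.values()) or 1
    return {label: counts[label] / total for label in TAXONOMY}


if __name__ == "__main__":
    # Toy stand-in judge so the sketch runs; in practice this would wrap an
    # actual LLM API call returning one of the taxonomy labels.
    def toy_judge(prompt: str) -> str:
        return "Tendency to hallucinate"

    demo = ["Is the Eiffel Tower in Berlin?", "Summarize this contract."]
    print(coverage(demo, toy_judge))
```

Keeping the judge as a caller-supplied function decouples the coverage analysis from any particular model vendor, which is one plausible way to scale a mapping like this across 194,955 questions.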