The paper introduces SafeAudit, a framework for systematically auditing the safety of LLM agents interacting with external tools. SafeAudit uses an LLM-based enumerator to generate diverse tool-call workflows and user scenarios, then assesses safety using a novel "rule-resistance" metric to identify interaction patterns missed by existing benchmarks. Experiments across 3 benchmarks and 12 environments revealed that SafeAudit uncovers over 20% more unsafe behaviors, demonstrating significant gaps in current agent safety evaluations.
Current LLM agent safety benchmarks miss over 20% of unsafe behaviors: agents that pass a benchmark can still exhibit unsafe tool-call interaction patterns.
Large Language Model (LLM) agents increasingly act through external tools, making their safety contingent on tool-call workflows rather than text generation alone. While recent benchmarks evaluate agents across diverse environments and risk categories, a fundamental question remains unanswered: how complete are existing test suites, and what unsafe interaction patterns persist even after an agent passes the benchmark? We propose SafeAudit, a meta-audit framework that addresses this gap through two contributions. First, we design an LLM-based enumerator that systematically generates test cases by enumerating valid tool-call workflows and diverse user scenarios. Second, we introduce rule-resistance, a non-semantic, quantitative metric that distills compact safety rules from existing benchmarks and identifies unsafe interaction patterns that remain uncovered under those rules. Across 3 benchmarks and 12 environments, SafeAudit uncovers more than 20% additional unsafe behaviors that existing benchmarks fail to expose, with coverage growing monotonically as the testing budget increases. Our results highlight significant completeness gaps in current safety evaluation and motivate meta-auditing as a necessary complement to benchmark-based agent safety testing.
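To make the rule-resistance idea concrete, here is a minimal sketch of how such a metric could be computed. This is not the paper's implementation: the trace representation, the `SafetyRule` structure, and the exact formula (fraction of known-unsafe traces that no distilled rule flags) are assumptions made purely for exposition.

```python
# Illustrative sketch only: the rule representation and metric below are
# assumptions for exposition, not the paper's published implementation.
from dataclasses import dataclass
from typing import Callable, List, Tuple

# A tool-call trace is modeled as a list of (tool_name, argument) steps.
Trace = List[Tuple[str, str]]


@dataclass
class SafetyRule:
    """A compact safety rule distilled from an existing benchmark."""
    name: str
    flags: Callable[[Trace], bool]  # True if the rule flags the trace as unsafe


def rule_resistance(unsafe_traces: List[Trace], rules: List[SafetyRule]) -> float:
    """Fraction of known-unsafe traces that NO distilled rule flags.

    Under this (hypothetical) reading, a high value means many unsafe
    interaction patterns slip past the benchmark's implicit rule set.
    """
    if not unsafe_traces:
        return 0.0
    uncovered = sum(
        1 for trace in unsafe_traces
        if not any(rule.flags(trace) for rule in rules)
    )
    return uncovered / len(unsafe_traces)


# Toy example: a single rule forbidding any raw shell execution.
rules = [SafetyRule("no_shell", lambda t: any(name == "shell" for name, _ in t))]
unsafe = [
    [("shell", "rm -rf /")],            # flagged by the rule (covered)
    [("email", "exfiltrate secrets")],  # unsafe but uncovered by any rule
]
print(rule_resistance(unsafe, rules))  # 0.5
```

In this toy setup, half of the unsafe traces evade the distilled rule set, which is the kind of completeness gap the paper's metric is designed to surface.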