Search papers, labs, and topics across Lattice.
AgentHazard is introduced as a benchmark to evaluate the emergence of harmful behaviors in computer-use agents, where risks arise from sequences of seemingly benign actions. The benchmark contains 2,653 instances designed to test an agent's ability to recognize and interrupt harm arising from accumulated context, repeated tool use, and dependencies across steps. Evaluations of Claude Code, OpenClaw, and IFlow, powered by models like Qwen3-Coder, reveal high vulnerability, with Claude Code achieving a 73.63% attack success rate, highlighting the limitations of model alignment in ensuring agent safety.
Autonomous agents are alarmingly easy to trick into harmful behavior, even when using aligned models: Claude Code achieves a 73.63% success rate on the AgentHazard benchmark.
Computer-use agents extend language models from text generation to persistent action over tools, files, and execution environments. Unlike chat systems, they maintain state across interactions and translate intermediate outputs into concrete actions. This creates a distinct safety challenge in that harmful behavior may emerge through sequences of individually plausible steps, including intermediate actions that appear locally acceptable but collectively lead to unauthorized actions. We present \textbf{AgentHazard}, a benchmark for evaluating harmful behavior in computer-use agents. AgentHazard contains \textbf{2,653} instances spanning diverse risk categories and attack strategies. Each instance pairs a harmful objective with a sequence of operational steps that are locally legitimate but jointly induce unsafe behavior. The benchmark evaluates whether agents can recognize and interrupt harm arising from accumulated context, repeated tool use, intermediate actions, and dependencies across steps. We evaluate AgentHazard on Claude Code, OpenClaw, and IFlow using mostly open or openly deployable models from the Qwen3, Kimi, GLM, and DeepSeek families. Our experimental results indicate that current systems remain highly vulnerable. In particular, when powered by Qwen3-Coder, Claude Code exhibits an attack success rate of \textbf{73.63\%}, suggesting that model alignment alone does not reliably guarantee the safety of autonomous agents.