The paper introduces AJAR, a novel red-teaming framework designed to evaluate the action security of LLM-based autonomous agents by orchestrating complex, multi-turn attacks. AJAR uses a Protocol-driven Cognitive Orchestration approach built on Petri and the Model Context Protocol (MCP) to decouple adversarial logic from the execution loop, enabling modular integration of attack algorithms like X-Teaming. A qualitative case study demonstrates AJAR's ability to perform stateful backtracking in tool-use environments, revealing a trade-off where tool usage introduces new vulnerabilities but also disrupts persona-based attacks due to cognitive load.
Tool-using LLMs introduce a surprising "Agentic Gap" where the complexity of formatting parameters can actually *reduce* the effectiveness of some persona-based jailbreaks.
As Large Language Models (LLMs) evolve from static chatbots into autonomous agents capable of tool execution, the landscape of AI safety is shifting from content moderation to action security. However, existing red-teaming frameworks remain bifurcated: they either focus on rigid, script-based text attacks or lack the architectural modularity to simulate complex, multi-turn agentic exploitations. In this paper, we introduce AJAR (Adaptive Jailbreak Architecture for Red-teaming), a proof-of-concept framework designed to bridge this gap through Protocol-driven Cognitive Orchestration. Built upon the robust runtime of Petri, AJAR leverages the Model Context Protocol (MCP) to decouple adversarial logic from the execution loop, encapsulating state-of-the-art algorithms like X-Teaming as standardized, plug-and-play services. We validate the architectural feasibility of AJAR through a controlled qualitative case study, demonstrating its ability to perform stateful backtracking within a tool-use environment. Furthermore, our preliminary exploration of the "Agentic Gap" reveals a complex safety dynamic: while tool usage introduces new injection vectors via code execution, the cognitive load of parameter formatting can inadvertently disrupt persona-based attacks. AJAR is open-sourced to facilitate the standardized, environment-aware evaluation of this emerging attack surface. The code and data are available at https://github.com/douyipu/ajar.
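To make the architectural idea concrete, the sketch below models the decoupling the abstract describes: an attack strategy exposed behind a narrow interface (propose a turn, backtrack on refusal), with an orchestrator that drives the loop without knowing the strategy's internals. All names (`AttackModule`, `orchestrate`, etc.) are hypothetical illustrations, not the AJAR or MCP API; a real deployment would place the module behind an MCP service boundary rather than a Python class.

```python
from dataclasses import dataclass, field

@dataclass
class AttackModule:
    """Hypothetical plug-and-play adversarial strategy.

    In AJAR's design, algorithms like X-Teaming would sit behind MCP;
    here we model only the interface shape: propose a turn, and
    backtrack (restore prior conversation state) when a branch fails.
    """
    name: str
    history: list = field(default_factory=list)       # accepted turns
    checkpoints: list = field(default_factory=list)   # saved states

    def propose_turn(self, goal: str) -> str:
        # Snapshot state *before* committing the turn, so a refusal
        # can roll back to exactly this point (stateful backtracking).
        self.checkpoints.append(list(self.history))
        turn = f"[{self.name}] probe {len(self.history) + 1} toward: {goal}"
        self.history.append(turn)
        return turn

    def backtrack(self) -> None:
        # Restore the conversation state saved before the last turn,
        # so the refused branch can be replaced with a new strategy.
        if self.checkpoints:
            self.history = self.checkpoints.pop()

def orchestrate(module: AttackModule, goal: str, refusals: list) -> list:
    """Toy execution loop: the orchestrator is decoupled from the
    adversarial logic and only calls the module's public interface.
    `refusals` stands in for the target agent's accept/refuse replies."""
    for was_refused in refusals:
        module.propose_turn(goal)
        if was_refused:
            module.backtrack()
    return module.history
```

A refused second turn is rolled back and retried, so the surviving transcript contains two probes rather than three; the point of the separation is that swapping in a different `AttackModule` requires no change to the loop.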