Search papers, labs, and topics across Lattice.
JailAgent is introduced, a novel red-teaming framework that bypasses direct prompt manipulation by instead hijacking the agent's reasoning trajectory and memory retrieval process. This is achieved through three stages: trigger extraction, reasoning hijacking, and constraint tightening, allowing for more robust attacks. Experiments demonstrate JailAgent's effectiveness across different models and scenarios, highlighting the vulnerabilities of LLM agents beyond prompt-based attacks.
LLM agents can be reliably jailbroken without modifying user prompts, revealing a critical vulnerability in their reasoning and memory mechanisms.
With the widespread application of LLM-based agents across various domains, their complexity has introduced new security threats. Existing red-team methods mostly rely on modifying user prompts, which lack adaptability to new data and may impact the agent's performance. To address the challenge, this paper proposes the JailAgent framework, which completely avoids modifying the user prompt. Specifically, it implicitly manipulates the agent's reasoning trajectory and memory retrieval with three key stages: Trigger Extraction, Reasoning Hijacking, and Constraint Tightening. Through precise trigger identification, real-time adaptive mechanisms, and an optimized objective function, JailAgent demonstrates outstanding performance in cross-model and cross-scenario environments.