Search papers, labs, and topics across Lattice.
This paper presents a lifecycle-oriented security framework for analyzing threats to autonomous LLM agents like OpenClaw, categorizing attacks across initialization, input, inference, decision, and execution stages. Through case studies, the authors demonstrate vulnerabilities such as indirect prompt injection and memory poisoning, revealing the limitations of current point-based defenses against systemic risks. The paper then examines potential defense strategies at each lifecycle stage, including plugin vetting and intent verification.
Autonomous LLM agents are riddled with vulnerabilities, as point defenses fail to address cross-temporal and multi-stage systemic risks like memory poisoning and intent drift.
Autonomous Large Language Model (LLM) agents, exemplified by OpenClaw, demonstrate remarkable capabilities in executing complex, long-horizon tasks. However, their tightly coupled instant-messaging interaction paradigm and high-privilege execution capabilities substantially expand the system attack surface. In this paper, we present a comprehensive security threat analysis of OpenClaw. To structure our analysis, we introduce a five-layer lifecycle-oriented security framework that captures key stages of agent operation, i.e., initialization, input, inference, decision, and execution, and systematically examine compound threats across the agent's operational lifecycle, including indirect prompt injection, skill supply chain contamination, memory poisoning, and intent drift. Through detailed case studies on OpenClaw, we demonstrate the prevalence and severity of these threats and analyze the limitations of existing defenses. Our findings reveal critical weaknesses in current point-based defense mechanisms when addressing cross-temporal and multi-stage systemic risks, highlighting the need for holistic security architectures for autonomous LLM agents. Within this framework, we further examine representative defense strategies at each lifecycle stage, including plugin vetting frameworks, context-aware instruction filtering, memory integrity validation protocols, intent verification mechanisms, and capability enforcement architectures.