This paper provides the first systematic survey of the attack and defense landscape for AI agents, highlighting the unique security challenges arising from the integration of LLMs with non-AI systems. It analyzes the design space, attack vectors, and defense mechanisms relevant to securing AI agent systems, and introduces a framework for understanding security risks and defense strategies. Case studies expose existing gaps in securing agentic AI systems, paving the way for future research in this domain.
Securing AI agents demands a new security paradigm, as their integration of LLMs with traditional systems introduces vulnerabilities beyond those of standard software.
AI agents that combine large language models with non-AI system components are rapidly emerging in real-world applications, offering unprecedented automation and flexibility. However, this flexibility introduces complex security challenges fundamentally different from those in traditional software systems. This paper presents the first systematic and comprehensive survey of AI agent security, including an analysis of the design space, attack landscape, and defense mechanisms for secure AI agent systems. We further conduct case studies to expose existing gaps in securing agentic AI systems and identify open challenges in this emerging domain. Our work also introduces the first systematic framework for understanding the security risks and defense strategies of AI agents, serving as a foundation both for building secure agentic systems and for advancing research in this critical area.
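The abstract's central claim, that composing an LLM with non-AI components creates vulnerabilities absent from either part alone, can be illustrated with a minimal sketch. Prompt injection is one commonly cited attack vector of this kind (an illustrative example, not one drawn from this paper): untrusted text returned by a tool is fed back to the model, which may then emit a malicious action. A defense in the spirit of the mechanisms the survey covers is to validate every model-proposed tool call against an explicit policy before executing it. All names below (`dispatch`, `ALLOWED_TOOLS`) are hypothetical.

```python
# Hypothetical sketch: an agent runtime that checks model-proposed tool
# calls against an allowlist instead of executing them blindly.
# Tool and function names are illustrative, not taken from the paper.

ALLOWED_TOOLS = {
    "search_docs": {"query"},        # tool name -> permitted argument keys
    "read_file":   {"path"},
}

def dispatch(tool_call: dict) -> str:
    """Execute a model-proposed tool call only if it passes policy checks."""
    name = tool_call.get("name")
    args = tool_call.get("args", {})
    if name not in ALLOWED_TOOLS:
        return f"BLOCKED: tool '{name}' is not allowlisted"
    extra = set(args) - ALLOWED_TOOLS[name]
    if extra:
        return f"BLOCKED: unexpected arguments {sorted(extra)}"
    # A real runtime would invoke the tool here; the sketch just echoes.
    return f"OK: would run {name}({args})"

# Injected instructions in retrieved text might make the model emit the
# first call; the policy layer stops it while letting benign calls through.
malicious = {"name": "delete_all_files", "args": {"path": "/"}}
benign    = {"name": "search_docs", "args": {"query": "agent security"}}

print(dispatch(malicious))
print(dispatch(benign))
```

The design point this sketch makes is the one the survey's framing implies: because the model's output is untrusted, enforcement must live in the non-AI layer that actually executes actions, not in the prompt.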