Search papers, labs, and topics across Lattice.
This paper introduces a systematic benchmark for evaluating knowledge-extraction attacks and defenses in Retrieval-Augmented Generation (RAG) systems, addressing the fragmented landscape of existing research. The benchmark encompasses a variety of attack and defense strategies, retrieval embedding models, and both open- and closed-source generators, evaluated under a unified framework with standardized protocols. The results provide actionable insights for developing privacy-preserving RAG systems and a practical foundation for future research in this area.
A unified benchmark reveals the fragmented landscape of RAG security, highlighting vulnerabilities to knowledge-extraction attacks and paving the way for robust defense strategies.
Retrieval-Augmented Generation (RAG) has become a cornerstone of knowledge-intensive applications, including enterprise chatbots, healthcare assistants, and agentic memory management. However, recent studies show that knowledge-extraction attacks can recover sensitive knowledge-base content through maliciously crafted queries, raising serious concerns about intellectual property theft and privacy leakage. While prior work has explored individual attack and defense techniques, the research landscape remains fragmented, spanning heterogeneous retrieval embeddings, diverse generation models, and evaluations based on non-standardized metrics and inconsistent datasets. To address this gap, we introduce the first systematic benchmark for knowledge-extraction attacks on RAG systems. Our benchmark covers a broad spectrum of attack and defense strategies, representative retrieval embedding models, and both open- and closed-source generators, all evaluated under a unified experimental framework with standardized protocols across multiple datasets. By consolidating the experimental landscape and enabling reproducible, comparable evaluation, this benchmark provides actionable insights and a practical foundation for developing privacy-preserving RAG systems in the face of emerging knowledge extraction threats. Our code is available here.