Search papers, labs, and topics across Lattice.
This paper introduces a black-box skill stealing attack against LLM agents, exploiting the increasing reliance on skills for specialized tasks and the emergence of skill marketplaces. The authors develop an automated stealing prompt generation agent that leverages scenario rationalization, structure injection, and embedding filtering to extract proprietary skill content. Experiments across 3 commercial agent architectures and 5 LLMs demonstrate that skills can be extracted with as few as 3 interactions, highlighting significant copyright risks that current defenses struggle to fully mitigate.
LLM agents can have their proprietary skills stolen with just 3 interactions, exposing a major copyright vulnerability in the burgeoning skill marketplace.
LLM agents increasingly rely on skills to encapsulate reusable capabilities via progressively disclosed instructions. High-quality skills inject expert knowledge into general-purpose models, improving performance on specialized tasks. This quality and ease of dissemination drive the emergence of a skill economy: free skill marketplaces already report 90368 published skills, while paid marketplaces report more than 2000 listings and over $100,000 in creator earnings. Yet this growing marketplace also creates a new attack surface, as adversaries can interact with public agent to extract hidden proprietary skill content. We present the first empirical study of black-box skill stealing against LLM agent systems. To study this threat, we first derive an attack taxonomy from prior prompt-stealing methods and build an automated stealing prompt generation agent. This agent starts from model-generated seed prompts, expands them through scenario rationalization and structure injection, and enforces diversity via embedding filtering. This process yields a reproducible pipeline for evaluating agent systems. We evaluate such attacks across 3 commercial agent architectures and 5 LLMs. Our results show that agent skills can be extracted with only 3 interactions, posing a serious copyright risk. To mitigate this threat, we design defenses across three stages of the agent pipeline: input, inference, and output. Although these defenses achieve strong results, the attack remains inexpensive and readily automatable, allowing an adversary to launch repeated attempts with different variants; only one successful attempt is sufficient to compromise the protected skill. Overall, our findings suggest that these copyright risks are largely overlooked across proprietary agent ecosystems. We therefore advocate for more robust defense strategies that provide stronger protection guarantees.