The paper introduces Red Team vs. Blue Team (RvB), a training-free framework for AI system hardening formulated as a sequential, imperfect-information game. RvB iteratively exposes vulnerabilities using a Red Team, prompting a Blue Team to learn defensive strategies without parameter updates. Experiments on dynamic code hardening and jailbreak defense demonstrate that RvB enables the Blue Team to learn generalizable defensive principles, achieving high defense success rates (90% on code hardening and 45% on jailbreak defense) with near-zero false positive rates, outperforming baselines.
Forget retraining: this Red-Blue game hardens AI systems against jailbreaks and CVEs by teaching defensive principles without parameter updates.
The dual offensive and defensive utility of Large Language Models (LLMs) highlights a critical gap in AI security: the lack of unified frameworks for dynamic, iterative adversarial hardening. To bridge this gap, we propose the Red Team vs. Blue Team (RvB) framework, formulated as a training-free, sequential, imperfect-information game. In this process, the Red Team exposes vulnerabilities, driving the Blue Team to learn effective solutions without parameter updates. We validate our framework across two challenging domains: dynamic code hardening against CVEs and guardrail optimization against jailbreaks. Our empirical results show that this interaction compels the Blue Team to learn fundamental defensive principles, leading to robust remediations that are not merely overfitted to specific exploits. RvB achieves Defense Success Rates of 90% and 45% on the respective tasks while maintaining near-0% False Positive Rates, significantly surpassing baselines. This work establishes the iterative adversarial interaction framework as a practical paradigm that automates the continuous hardening of AI systems.
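The sequential game described above can be sketched as a simple loop: the Red Team proposes an attack, the Blue Team tries to block it, and any successful breach is distilled into a defensive rule kept in context rather than in model weights. The sketch below is a minimal toy illustration under stated assumptions; `ToyRedTeam`, `ToyBlueTeam`, and the substring-matching "rules" are hypothetical stand-ins, not the paper's actual implementation.

```python
# Minimal sketch of an iterative Red-vs-Blue hardening loop.
# All class names and the rule-matching scheme are hypothetical
# illustrations, not the paper's implementation.

class ToyRedTeam:
    """Proposes attacks drawn from a fixed pool of exploit patterns."""
    def __init__(self, attack_pool):
        self.attack_pool = list(attack_pool)

    def propose(self, round_idx):
        # Cycle through known attack patterns across rounds.
        return self.attack_pool[round_idx % len(self.attack_pool)]


class ToyBlueTeam:
    """Hardens a guardrail by accumulating textual rules (no parameter updates)."""
    def __init__(self):
        self.rules = set()  # learned defensive principles, kept in context

    def defend(self, attack):
        # Block the attack if any learned rule matches it.
        return any(rule in attack for rule in self.rules)

    def learn_from(self, attack):
        # Distill a defensive rule from the successful exploit.
        self.rules.add(attack)


def rvb_game(red, blue, rounds):
    """Sequential game: Red attacks; Blue adapts after each breach."""
    history = []
    for t in range(rounds):
        attack = red.propose(t)
        blocked = blue.defend(attack)
        if not blocked:
            blue.learn_from(attack)  # training-free adaptation step
        history.append((attack, blocked))
    return history


history = rvb_game(ToyRedTeam(["sqli", "xss"]), ToyBlueTeam(), rounds=4)
print([blocked for _, blocked in history])  # → [False, False, True, True]
```

The key property the sketch captures is that the Blue Team's state lives entirely in the accumulated rule set, so hardening proceeds through interaction alone, with no gradient updates to either player.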