Trojan-Speak, a novel adversarial fine-tuning method, uses curriculum learning and GRPO-based hybrid reinforcement learning to teach models a communication protocol that evades Anthropic's Constitutional Classifiers. The approach achieves over 99% classifier evasion on models with 14B or more parameters while retaining more than 95% of original reasoning capability, a substantial improvement over prior adversarial fine-tuning methods. The work demonstrates successful evasion on expert-level CBRN queries, highlighting a vulnerability of LLM-based content classifiers when adversaries have fine-tuning access.
Adversarial fine-tuning can now bypass Constitutional Classifier safeguards with almost no performance penalty, enabling models to provide detailed instructions on dangerous topics such as chemical, biological, radiological, and nuclear (CBRN) weapons.
Fine-tuning APIs offered by major AI providers create new attack surfaces where adversaries can bypass safety measures through targeted fine-tuning. We introduce Trojan-Speak, an adversarial fine-tuning method that bypasses Anthropic's Constitutional Classifiers. Our approach uses curriculum learning combined with GRPO-based hybrid reinforcement learning to teach models a communication protocol that evades LLM-based content classification. Crucially, while prior adversarial fine-tuning approaches report more than 25% capability degradation on reasoning benchmarks, Trojan-Speak incurs less than 5% degradation while achieving over 99% classifier evasion for models with 14B or more parameters. We demonstrate that fine-tuned models can provide detailed responses to expert-level CBRN (Chemical, Biological, Radiological, and Nuclear) queries from Anthropic's Constitutional Classifiers bug-bounty program. Our findings reveal that LLM-based content classifiers alone are insufficient for preventing dangerous information disclosure when adversaries have fine-tuning access, and we show that activation-level probes can substantially improve robustness to such attacks.
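The closing claim about activation-level probes can be illustrated with a minimal sketch: a linear probe (logistic regression) trained on hidden-state activations to separate harmful from benign content. The code below is a toy illustration only; synthetic Gaussian vectors stand in for real model activations, and the dimension, mean shift, and training loop are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64   # hidden-state dimension (toy size; real models use thousands)
n = 500  # examples per class

# Synthetic stand-ins for residual-stream activations: the two classes
# differ by a mean shift along a single "harmfulness" direction.
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)
benign = rng.normal(size=(n, d))
harmful = rng.normal(size=(n, d)) + 4.0 * direction

X = np.vstack([benign, harmful])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Train a linear probe (logistic regression) by plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid of the probe logit
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

acc = np.mean((X @ w + b > 0) == (y == 1))
print(f"probe accuracy: {acc:.2f}")
```

The intuition behind such probes is that a probe reads internal representations rather than output tokens, so a surface-level communication protocol that fools a text classifier need not fool it.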