This paper introduces a scaling-law framework for systematically evaluating jailbreak attacks on LLMs, treating each attack as a compute-bounded optimization procedure and measuring success as a function of FLOPs. The authors evaluate four jailbreak paradigms (optimization-based, self-refinement prompting, sampling-based selection, and genetic optimization) across model families and harmful goals, fitting a saturating exponential function to FLOPs--success trajectories. Results show that prompting-based attacks are more compute-efficient and occupy a high-success, high-stealth region, and that vulnerability is strongly goal-dependent, with misinformation harms being easier to elicit than other harm types.
Prompt-based jailbreak attacks aren't just effective; they're strikingly efficient, outperforming optimization-based methods by navigating the prompt space more effectively.
Large language models remain vulnerable to jailbreak attacks, yet we still lack a systematic understanding of how jailbreak success scales with attacker effort across methods, model families, and harm types. We introduce a scaling-law framework for jailbreaks by treating each attack as a compute-bounded optimization procedure and measuring progress on a shared FLOPs axis. Our systematic evaluation spans four representative jailbreak paradigms, covering optimization-based attacks, self-refinement prompting, sampling-based selection, and genetic optimization, across multiple model families and scales on a diverse set of harmful goals. We investigate scaling laws that relate attacker budget to attack success score by fitting a simple saturating exponential function to FLOPs--success trajectories, and we derive comparable efficiency summaries from the fitted curves. Empirically, prompting-based paradigms tend to be markedly more compute-efficient than optimization-based methods. To explain this gap, we cast prompt-based updates in an optimization view and show via a same-state comparison that prompt-based attacks optimize more effectively in prompt space. We also show that attacks occupy distinct success--stealthiness operating points, with prompting-based methods lying in the high-success, high-stealth region. Finally, we find that vulnerability is strongly goal-dependent: harms involving misinformation are typically easier to elicit than other harm types.
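To make the fitting step concrete, the sketch below fits a saturating exponential to a FLOPs--success trajectory and derives a simple efficiency summary from the fitted curve. The exact functional form, parameter names, and data here are illustrative assumptions; the abstract specifies only that a "simple saturating exponential" is fit to FLOPs--success trajectories.

```python
# Minimal sketch of a saturating-exponential fit over a FLOPs--success
# trajectory. The functional form s(C) = s_max * (1 - exp(-rate * C)) and
# the data below are assumptions for illustration, not the paper's exact setup.
import numpy as np
from scipy.optimize import curve_fit

def saturating_exp(flops, s_max, rate):
    """Attack success that rises toward an asymptote s_max as compute grows."""
    return s_max * (1.0 - np.exp(-rate * flops))

# Hypothetical trajectory: attacker FLOPs budget vs. observed success score.
flops = np.array([1e12, 5e12, 1e13, 5e13, 1e14, 5e14])
success = np.array([0.05, 0.18, 0.30, 0.55, 0.62, 0.70])

params, _ = curve_fit(saturating_exp, flops, success, p0=[0.8, 1e-13])
s_max, rate = params

# One plausible efficiency summary: compute needed to reach half the fitted
# asymptote, i.e. solve s_max * (1 - exp(-rate * C)) = s_max / 2 for C.
flops_to_half = np.log(2.0) / rate
print(f"fitted asymptote s_max={s_max:.2f}, FLOPs to s_max/2 ~ {flops_to_half:.2e}")
```

Summaries like the fitted asymptote and the compute needed to reach a fixed fraction of it put attacks with very different inner loops (gradient steps, LLM queries, mutation rounds) on a comparable footing, which is what the shared FLOPs axis is for.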