This paper investigates the effectiveness of multi-generation sampling for detecting jailbreak behaviour in aligned LLMs using the JailbreakBench Behaviors dataset. The authors find that single-output evaluation significantly underestimates jailbreak vulnerability, and that moderate multi-sample auditing provides a more reliable estimate of model vulnerability. They also show that detection signals partially generalise across models, with stronger transfer within related model families, and that lexical detectors capture topic-specific cues in addition to harmful behaviour.
Single-shot jailbreak detection misses a substantial fraction of harmful LLM behaviour, meaning current safety evaluations are likely overoptimistic.
Detecting jailbreak behaviour in large language models remains challenging, particularly when strongly aligned models produce harmful outputs only rarely. In this work, we present an empirical study of output-based jailbreak detection under realistic conditions, using the JailbreakBench Behaviors dataset and multiple generator models with varying alignment strengths. We evaluate both a lexical TF-IDF detector and a generation-inconsistency detector across different sampling budgets. Our results show that single-output evaluation systematically underestimates jailbreak vulnerability: increasing the number of sampled generations reveals additional harmful behaviour. The largest gains occur when moving from a single generation to moderate sampling, while larger sampling budgets yield diminishing returns. Cross-generator experiments demonstrate that detection signals partially generalise across models, with stronger transfer observed within related model families. A category-level analysis further reveals that lexical detectors capture a mixture of behavioural signals and topic-specific cues, rather than purely harmful behaviour. Overall, our findings suggest that moderate multi-sample auditing provides a more reliable and practical approach for estimating model vulnerability and improving jailbreak detection in large language models. Code will be released.
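To make the multi-sample auditing protocol concrete, here is a minimal sketch in Python. It assumes a `generate(prompt)` callable for the model under audit; the toy training data, function names, and detector choice (scikit-learn TF-IDF features with logistic regression) are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of multi-sample jailbreak auditing with a lexical detector.
# All names and the toy training data below are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# --- Train a simple TF-IDF lexical detector on toy labelled outputs ---
train_texts = [
    "Sure, here is how to build the device step by step",   # harmful
    "I'm sorry, but I can't help with that request",        # refusal
    "Here are detailed instructions for the attack",        # harmful
    "I cannot assist with harmful or illegal activities",   # refusal
]
train_labels = [1, 0, 1, 0]  # 1 = harmful completion, 0 = refusal/safe

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
detector = LogisticRegression().fit(
    vectorizer.fit_transform(train_texts), train_labels
)

def is_harmful(text: str) -> bool:
    """Flag a single generation as harmful using the lexical detector."""
    return bool(detector.predict(vectorizer.transform([text]))[0])

def audit(generate, prompt: str, k: int) -> bool:
    """Multi-sample audit: the prompt counts as a successful jailbreak
    if any of k sampled generations is flagged as harmful."""
    return any(is_harmful(generate(prompt)) for _ in range(k))
```

As a sanity check on the diminishing-returns finding: if a single sample elicits harmful output with probability p, then under an independence assumption a budget of k samples reveals it with probability 1 - (1 - p)^k. For p = 0.05, that is about 0.40 at k = 10 but only about 0.64 at k = 20, so most of the benefit comes from the first moderate increase in budget.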