This paper investigates the impact of multi-model synthetic preference data on the safety alignment of LLMs using Direct Preference Optimization (DPO). The study finds that while multi-model data improves general task performance, it significantly degrades safety by increasing vulnerability to jailbreaking attacks through reward hacking. Specifically, using stronger models such as GPT-4o to generate chosen responses paired with the target model's self-generated rejected responses exacerbates this safety degradation, whereas fully self-generated preference data yields better safety outcomes.
Using preference data from stronger models to align LLMs via DPO can backfire, dramatically worsening safety by making models more susceptible to jailbreaking.
Aligning large language models (LLMs) with human values is an increasingly critical step in post-training. Direct Preference Optimization (DPO) has emerged as a simple yet effective alternative to reinforcement learning from human feedback (RLHF). Synthetic preference data, with its low cost and high quality, enables effective alignment, whether generated by a single model or by multiple models. Our study reveals a striking, safety-specific phenomenon associated with DPO alignment: although multi-model generated data enhances performance on general tasks (ARC, HellaSwag, MMLU, TruthfulQA, Winogrande) by providing diverse responses, it also tends to facilitate reward hacking during training. This can lead to a high attack success rate (ASR) when models encounter jailbreaking prompts. The issue is particularly pronounced when employing stronger models, such as GPT-4o or larger models in the same family, to generate chosen responses paired with rejected responses self-generated by the target model, resulting in dramatically poorer safety outcomes. Furthermore, with respect to safety, using solely self-generated responses (single-model generation) for both chosen and rejected pairs significantly outperforms configurations that incorporate responses from stronger models, whether used directly as chosen data or as part of a multi-model response pool. We demonstrate that multi-model preference data exhibits high linear separability between chosen and rejected responses, which allows models to exploit superficial cues rather than internalizing robust safety constraints. Our experiments, conducted on models from the Llama, Mistral, and Qwen families, consistently validate these findings.
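One way to make the linear-separability claim concrete is to fit a simple linear probe on embeddings of chosen and rejected responses and check its held-out accuracy: the closer the accuracy is to 1.0, the more easily a trained model can rely on shallow, linearly decodable cues. The sketch below is a minimal illustration under assumed inputs (synthetic placeholder embeddings, a hypothetical pair count and embedding dimension), not the paper's actual data pipeline or embedding model.

```python
# Minimal sketch: estimate linear separability of chosen vs. rejected responses
# with a logistic-regression probe on response embeddings.
# NOTE: the embeddings below are synthetic placeholders; in practice they would
# come from an embedding model applied to the preference-pair texts.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_pairs, dim = 500, 768  # hypothetical number of preference pairs and embedding size

# Placeholder embeddings: chosen responses drawn from a slightly shifted
# distribution to mimic a stronger model's stylistic signature.
chosen = rng.normal(loc=0.5, scale=1.0, size=(n_pairs, dim))
rejected = rng.normal(loc=0.0, scale=1.0, size=(n_pairs, dim))

X = np.vstack([chosen, rejected])
y = np.concatenate([np.ones(n_pairs), np.zeros(n_pairs)])  # 1 = chosen, 0 = rejected

# Cross-validated accuracy of a linear probe; accuracy near 1.0 means the pairs
# are nearly linearly separable, so a DPO-trained model could latch onto
# superficial cues instead of internalizing safety constraints.
probe = LogisticRegression(max_iter=1000)
scores = cross_val_score(probe, X, y, cv=5)
print(f"linear-probe accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```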