The paper introduces SafeDPO, a simplified approach, inspired by Direct Preference Optimization (DPO), that aligns LLMs with safety objectives by directly optimizing the safety alignment objective in a single stage of policy learning. SafeDPO adds a single hyperparameter to control the trade-off between preference alignment and safety, and it avoids the complexity of fitting separate reward and cost models or sampling from the language model during fine-tuning. Experiments show that SafeDPO is competitive with existing safety alignment algorithms in both aligning with human preferences and improving safety.
SafeDPO reduces LLM safety alignment to a single-stage, DPO-style procedure, achieving safety competitive with state-of-the-art alignment algorithms while adding minimal complexity.
As Large Language Models (LLMs) continue to advance and find applications across a growing number of fields, ensuring the safety of LLMs has become increasingly critical. To address safety concerns, recent studies have proposed integrating safety constraints into Reinforcement Learning from Human Feedback (RLHF). However, these approaches tend to be complex, as they encompass complicated procedures in RLHF along with additional steps required by the safety constraints. Inspired by Direct Preference Optimization (DPO), we introduce a new algorithm called SafeDPO, which is designed to directly optimize the safety alignment objective in a single stage of policy learning, without requiring relaxation. SafeDPO introduces only one additional hyperparameter to further enhance safety and requires only minor modifications to standard DPO. As a result, it eliminates the need to fit separate reward and cost models or to sample from the language model during fine-tuning, while still enhancing the safety of LLMs. Finally, we demonstrate that SafeDPO achieves competitive performance compared to state-of-the-art safety alignment algorithms, both in terms of aligning with human preferences and improving safety.
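For intuition on what a single-stage, DPO-style safety objective can look like, here is a minimal PyTorch sketch. It computes the standard DPO loss from per-response log-probabilities and adds an illustrative margin, scaled by a hypothetical hyperparameter `delta`, to pairs whose rejected response is labeled unsafe. The function name, tensor layout, and the specific form of the safety term are assumptions for illustration only, not the paper's exact SafeDPO objective.

```python
import torch
import torch.nn.functional as F

def dpo_style_safety_loss(
    policy_chosen_logps: torch.Tensor,    # log pi_theta(y_w | x), shape (B,)
    policy_rejected_logps: torch.Tensor,  # log pi_theta(y_l | x), shape (B,)
    ref_chosen_logps: torch.Tensor,       # log pi_ref(y_w | x), shape (B,)
    ref_rejected_logps: torch.Tensor,     # log pi_ref(y_l | x), shape (B,)
    rejected_is_unsafe: torch.Tensor,     # bool mask, 1 if the rejected response is labeled unsafe
    beta: float = 0.1,                    # standard DPO temperature
    delta: float = 1.0,                   # illustrative safety hyperparameter (assumed name and role)
) -> torch.Tensor:
    """DPO loss with an extra margin on pairs whose rejected response is unsafe.

    The margin is a stand-in for a single safety hyperparameter; the paper's
    actual SafeDPO objective may differ.
    """
    # Implicit rewards, exactly as in standard DPO.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)

    # Subtracting a positive margin requires the reward gap to exceed delta,
    # pushing unsafe rejected responses further below the preferred ones.
    margin = delta * rejected_is_unsafe.float()
    logits = chosen_rewards - rejected_rewards - margin

    return -F.logsigmoid(logits).mean()
```

With `delta = 0` this reduces exactly to standard DPO, which mirrors the abstract's claim that SafeDPO needs only minor modifications to DPO plus one additional hyperparameter, and no separate reward or cost models.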