This paper addresses the problem of noisy preferences in reward modeling for reinforcement learning from human feedback (RLHF) and offline preference optimization. It frames reward modeling as a classification problem and introduces Symmetric Preference Optimization (SymPO), which leverages symmetric losses to mitigate the impact of noisy preference labels. The authors theoretically prove that symmetric losses maintain rank-preserving rewards even with noisy labels, guaranteeing policy improvement, and empirically demonstrate SymPO's effectiveness on synthetic and real-world tasks.
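The key property behind this robustness can be illustrated with the standard symmetric-loss argument from noise-robust classification (which the paper's framing builds on). If a loss satisfies ell(z) + ell(-z) = C for all margins z, then under random label flips with rate eps < 0.5, the expected noisy loss is an affine transform of the clean loss, so minimizers are unchanged. The sketch below verifies this identity numerically with the sigmoid loss; it is an illustrative demonstration of the general principle, not the paper's derivation, and all names are ours.

```python
import math

def sigmoid_loss(z):
    # ell(z) = sigma(-z); a symmetric loss with ell(z) + ell(-z) = 1.
    return 1.0 / (1.0 + math.exp(z))

eps = 0.3  # label-flip rate; robustness requires eps < 0.5
C = 1.0    # symmetry constant for the sigmoid loss

for z in [-2.0, -0.5, 0.0, 1.5, 3.0]:
    # Expected loss when the preference label flips with probability eps.
    noisy_risk = (1 - eps) * sigmoid_loss(z) + eps * sigmoid_loss(-z)
    # Symmetry implies ell(-z) = C - ell(z), so the noisy risk is an
    # affine function of the clean loss: (1 - 2*eps) * ell(z) + eps * C.
    affine_form = (1 - 2 * eps) * sigmoid_loss(z) + eps * C
    assert abs(noisy_risk - affine_form) < 1e-12
```

Because the noisy risk is a positive scaling (1 - 2*eps > 0) of the clean loss plus a constant, ranking models by noisy risk ranks them exactly as the clean risk would, which is why the learned reward remains rank-preserving.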
Even with noisy human preferences, symmetric losses can guarantee rank-preserving rewards, unlocking robust policy optimization for aligning language models.
Optimizing policies based on human preferences is key to aligning language models with human intent. This work focuses on reward modeling, a core component in reinforcement learning from human feedback (RLHF), and offline preference optimization, such as direct preference optimization. Conventional approaches typically assume accurate annotations. However, real-world preference data often contains noise due to human errors or biases. We propose a principled framework for robust policy optimization under noisy preferences, viewing reward modeling as a classification problem. This allows us to leverage symmetric losses, known for their robustness to label noise in classification, leading to our Symmetric Preference Optimization (SymPO) method. We prove that symmetric losses enable successful policy optimization even under noisy labels, as the resulting reward remains rank-preserving -- a property sufficient for policy improvement. Experiments on synthetic and real-world tasks demonstrate the effectiveness of SymPO.
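To make the symmetry condition concrete: a loss is symmetric when ell(z) + ell(-z) is constant in z, where z is the preference margin (e.g., the reward gap between the chosen and rejected response). The sketch below contrasts the sigmoid loss, which satisfies this condition exactly, with the logistic loss used in standard preference optimization, which does not. This is a minimal illustration under our own naming, not the paper's implementation.

```python
import math

def sigmoid_loss(z):
    # Symmetric: sigmoid_loss(z) + sigmoid_loss(-z) = 1 for every z.
    return 1.0 / (1.0 + math.exp(z))

def logistic_loss(z):
    # Logistic (cross-entropy) loss on the margin; NOT symmetric.
    return math.log(1.0 + math.exp(-z))

# Margins z = r(x, y_chosen) - r(x, y_rejected) at a few sample values.
margins = [-2.0, -0.5, 0.0, 1.0, 3.0]

sym = [sigmoid_loss(z) + sigmoid_loss(-z) for z in margins]    # constant (1.0)
asym = [logistic_loss(z) + logistic_loss(-z) for z in margins]  # varies with z
```

Swapping the non-symmetric logistic loss for a symmetric one is the essence of the SymPO recipe described above: the training objective stays a simple binary classification over preference pairs, but flipped labels can no longer change which reward function minimizes the expected loss.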