This paper introduces a distributionally robust reinforcement learning from human feedback (DR-RLHF) approach to improve the out-of-distribution (OOD) generalization of large language models (LLMs) fine-tuned with human preferences. The authors formulate distributionally robust optimization (DRO) versions of reward-based RLHF and reward-free DPO, providing convergence guarantees for minibatch gradient descent algorithms. Empirical results demonstrate that DR-RLHF enhances reward model accuracy and policy optimization performance on OOD tasks, particularly in reasoning.
RLHF models can be made significantly more robust to distribution shift by incorporating distributionally robust optimization into both reward modeling and policy optimization.
Reinforcement learning from human feedback (RLHF) has become one of the main methods for fine-tuning large language models (LLMs). However, existing RLHF methods are not robust to distribution shift: their performance deteriorates when the downstream task differs significantly from the preference data used in fine-tuning. To mitigate this problem, we introduce a distributionally robust RLHF framework for fine-tuning LLMs. In particular, our goal is to ensure that a fine-tuned model retains its performance even when the distribution of prompts differs significantly from the distribution encountered during fine-tuning. We formulate distributionally robust optimization (DRO) versions of two popular fine-tuning methods: (1) reward-based RLHF and (2) reward-free direct preference optimization (DPO). We propose minibatch gradient descent based algorithms for both and theoretically prove convergence guarantees for them. We then evaluate our algorithms on out-of-distribution (OOD) tasks by first training the model on the Unified-Feedback dataset and then evaluating its performance on two different datasets. The experimental results show that our robust training improves the accuracy of the learned reward models on average, and markedly on some tasks, such as reasoning. Furthermore, we show that the robust versions of the policy optimization methods similarly improve performance on OOD tasks.
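The abstract does not spell out the DRO formulation, but a standard construction uses the dual of a KL-divergence uncertainty set, which replaces the empirical mean of per-example losses with a log-sum-exp tilt. The sketch below applies that idea to the DPO and Bradley-Terry reward-model losses as a minimal illustration; the function names, the temperature `rho`, and the tilt itself are assumptions for illustration and may differ from the paper's exact objective and algorithms.

```python
import math

import torch
import torch.nn.functional as F


def dro_tilt(losses: torch.Tensor, rho: float) -> torch.Tensor:
    """KL-DRO surrogate: rho * log( mean_i exp(loss_i / rho) ).

    Dual upper bound on the worst-case expected loss over a KL ball
    around the data distribution. As rho -> infinity it recovers the
    ordinary mean; smaller rho weights hard examples more heavily.
    """
    return rho * (torch.logsumexp(losses / rho, dim=0) - math.log(losses.numel()))


def dr_dpo_loss(policy_logratios, ref_logratios, beta=0.1, rho=1.0):
    """Distributionally robust DPO loss over a minibatch of preference pairs.

    policy_logratios: log pi(y_w|x) - log pi(y_l|x) under the trained policy.
    ref_logratios:    the same quantity under the frozen reference model.
    """
    per_pair = -F.logsigmoid(beta * (policy_logratios - ref_logratios))
    return dro_tilt(per_pair, rho)


def dr_reward_loss(reward_chosen, reward_rejected, rho=1.0):
    """The same tilt applied to the Bradley-Terry reward-model loss."""
    per_pair = -F.logsigmoid(reward_chosen - reward_rejected)
    return dro_tilt(per_pair, rho)


# Toy usage: one minibatch gradient step on random log-ratios.
if __name__ == "__main__":
    policy_lr = torch.randn(8, requires_grad=True)
    ref_lr = torch.randn(8)
    loss = dr_dpo_loss(policy_lr, ref_lr, beta=0.1, rho=0.5)
    loss.backward()  # gradients flow to the policy parameters as usual
```

Because the tilt is just a reweighted minibatch loss, it slots into standard minibatch gradient descent training loops; the hyperparameter `rho` trades off robustness against fidelity to the empirical preference distribution.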