This paper addresses the challenge of exploration in online Reinforcement Learning from Human Feedback (RLHF) for aligning Large Language Models (LLMs) with human preferences. The authors theoretically motivate and empirically validate a count-based exploration bonus that encourages LLMs to explore novel prompt-response pairs during online training. The proposed Count-based Online Preference Optimization (COPO) algorithm, which uses a coin-flip counting module, demonstrates significant performance gains on instruction-following and academic benchmarks when applied to Zephyr and Llama-3 models.
LLMs can learn better from human feedback by exploring more creatively, thanks to a simple coin-flip counting method that encourages them to try new things.
Reinforcement Learning from Human Feedback (RLHF) has shown great potential in fine-tuning Large Language Models (LLMs) to align with human preferences. Existing methods perform preference alignment from a fixed dataset, which can be limited in data coverage, and the resulting reward model generalizes poorly to out-of-distribution responses. Thus, online RLHF is more desirable to empower the LLM to explore outside the support of the initial dataset by iteratively collecting prompt-response pairs. In this paper, we study the fundamental problem in online RLHF, i.e., \emph{how to explore} for LLMs. We give a theoretical motivation under a linear reward assumption to show that an optimistic reward with an upper confidence bound (UCB) term leads to a provably efficient RLHF policy. Then, we reformulate our objective as direct preference optimization with an exploration term, where the UCB term can be converted to a count-based exploration bonus. We further propose a practical algorithm, named \emph{Count-based Online Preference Optimization (COPO)}, which leverages a simple coin-flip counting module to estimate the pseudo-count of a prompt-response pair in previously collected data. COPO encourages LLMs to balance exploration and preference optimization in an iterative manner, which enlarges the exploration space and the overall data coverage of the iterative LLM policies. We conduct online RLHF experiments on Zephyr and Llama-3 models. The results on instruction-following and standard academic benchmarks show that COPO significantly improves performance.
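The coin-flip counting idea behind the pseudo-count can be illustrated with a small sketch (hypothetical code, not the paper's implementation): each of several output heads regresses onto i.i.d. Rademacher (+/-1) labels for an input, so for an input seen n times the least-squares prediction per head is the empirical mean of its n labels, whose square has expectation 1/n; inverting the average squared prediction recovers a pseudo-count, which then feeds a count-based bonus of the form coef / sqrt(N).

```python
import random

def coinflip_pseudocount(num_visits, dim=512, seed=0):
    """Toy coin-flip counting demo (an assumed sketch): each of `dim`
    heads fits i.i.d. +/-1 labels for one input; the least-squares
    prediction is the empirical mean, E[prediction^2] = 1/num_visits,
    so 1 / (mean of squared predictions) estimates the visit count."""
    rng = random.Random(seed)
    sq_preds = []
    for _ in range(dim):
        flips = [rng.choice((-1.0, 1.0)) for _ in range(num_visits)]
        pred = sum(flips) / num_visits  # per-head least-squares fit
        sq_preds.append(pred * pred)
    return 1.0 / (sum(sq_preds) / dim)  # pseudo-count estimate

def optimistic_reward(reward, pseudo_count, coef=1.0):
    """Optimistic reward: base reward plus a count-based exploration
    bonus coef / sqrt(N), mirroring the UCB-style term in the abstract."""
    return reward + coef / pseudo_count ** 0.5
```

Rarely seen prompt-response pairs get a small pseudo-count and hence a large bonus, which is what drives the policy toward novel pairs during the iterative collection loop.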