This paper addresses the challenge of efficient exploration in online reinforcement learning with human feedback (RLHF) by identifying a key limitation of existing optimism-based exploration algorithms: they often gather uninformative comparisons. The authors prove that these methods can incur linear regret over exponentially long horizons. To overcome this, they introduce a novel exploration scheme that focuses preference queries on reducing uncertainty in the reward differences most relevant to policy improvement.
Existing optimism-based RLHF exploration can incur linear regret; a new uncertainty-directed exploration strategy achieves regret that scales polynomially in all model parameters.
Reinforcement learning with human feedback (RLHF), which learns a reward model from human preference data and then optimizes a policy to favor preferred responses, has emerged as a central paradigm for aligning large language models (LLMs) with human preferences. In this paper, we investigate exploration principles for online RLHF, where one seeks to adaptively collect new preference data to refine both the reward model and the policy in a data-efficient manner. By examining existing optimism-based exploration algorithms, we identify a drawback in their sampling protocol: they tend to gather comparisons that fail to reduce the most informative uncertainties in reward differences, and we prove lower bounds showing that such methods can incur linear regret over exponentially long horizons. Motivated by this insight, we propose a new exploration scheme that directs preference queries toward reducing uncertainty in reward differences most relevant to policy improvement. Under a multi-armed bandit model of RLHF, we establish regret bounds of order $T^{(\beta+1)/(\beta+2)}$, where $\beta>0$ is a hyperparameter that balances reward maximization against mitigating distribution shift. To our knowledge, this is the first online RLHF algorithm with regret scaling polynomially in all model parameters.
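The abstract states the exploration principle but not the algorithmic details. Purely as a rough illustration, the sketch below shows what uncertainty-directed preference querying might look like in a Bradley-Terry multi-armed bandit: each round it compares the current greedy arm against the arm whose estimated reward difference with it is most uncertain under the inverse design matrix. The pairing rule, the gradient-style update, and all constants are assumptions made for this toy, not the paper's method; in particular, the hyperparameter $\beta$ that the paper uses to balance reward maximization against distribution shift is omitted here.

```python
import numpy as np

# Toy sketch of uncertainty-directed preference exploration in a
# Bradley-Terry multi-armed bandit. The pairing rule, the gradient-style
# update, and all constants are illustrative assumptions, not the
# paper's algorithm (the abstract does not specify these details).

rng = np.random.default_rng(0)

K = 10                        # number of arms
true_r = rng.normal(size=K)   # latent rewards, hidden from the learner
T = 2000                      # number of preference queries
lam = 1.0                     # ridge regularizer for the design matrix

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

V = lam * np.eye(K)   # design matrix over difference features e_a - e_b
r_hat = np.zeros(K)   # running reward estimate

for t in range(T):
    # Candidate arm for policy improvement: greedy under current estimate.
    a = int(np.argmax(r_hat))

    # Direct the query at the largest uncertainty in a reward *difference*
    # involving the greedy arm: ||e_a - e_b||_{V^{-1}}^2.
    V_inv = np.linalg.inv(V)
    unc = np.full(K, -np.inf)
    for arm in range(K):
        if arm != a:
            phi = np.eye(K)[a] - np.eye(K)[arm]
            unc[arm] = phi @ V_inv @ phi
    b = int(np.argmax(unc))

    # Query a Bradley-Terry preference oracle on the pair (a, b):
    # P(a preferred over b) = sigmoid(r_a - r_b).
    y = float(rng.random() < sigmoid(true_r[a] - true_r[b]))

    # Update the design matrix and take a gradient step on the logistic
    # preference loss (a stand-in for an exact MLE refit each round).
    phi = np.eye(K)[a] - np.eye(K)[b]
    V += np.outer(phi, phi)
    p = sigmoid(r_hat[a] - r_hat[b])
    r_hat += 0.5 * (y - p) * phi

print("best arm:", int(np.argmax(true_r)),
      "| estimated best:", int(np.argmax(r_hat)))
```

As a side note on the stated bound, the exponent $(\beta+1)/(\beta+2)$ is increasing in $\beta$ and ranges over $(1/2, 1)$ for $\beta \in (0, \infty)$, so smaller values of $\beta$ push the regret toward the familiar $\sqrt{T}$ rate while larger values trade that rate for stronger mitigation of distribution shift.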