The paper identifies suboptimal regularization and counter-intuitive interpolation behaviors in Direct Preference Optimization (DPO) and related methods, which stem from their reliance on reparameterizations to induce an implicit reward. To address these limitations, the authors introduce Explicit Preference Optimization (EXPO), a framework that incorporates regularization factors directly, without relying on reparameterization. EXPO provably satisfies regularization desiderata that DPO variants do not, and empirical results demonstrate its effectiveness.
DPO's "implicit reward" reparameterization leads to suboptimal regularization, but EXPO offers a fix with explicit, intuitive regularization factors that provably work better.
The generated responses of large language models (LLMs) are often fine-tuned to human preferences through a process called reinforcement learning from human feedback (RLHF). Because RLHF relies on a challenging training sequence, whereby a separate reward model is independently learned and then later applied to LLM policy updates, ongoing research effort has targeted more straightforward alternatives. In this regard, direct preference optimization (DPO) and its many offshoots circumvent the need for a separate reward training step. Instead, through the judicious use of a reparameterization trick that induces an implicit reward, DPO and related methods consolidate learning to the minimization of a single loss function. And yet despite demonstrable success in some real-world settings, we prove that DPO-based objectives are nonetheless subject to sub-optimal regularization and counter-intuitive interpolation behaviors, underappreciated artifacts of the reparameterizations upon which they are based. In response, we introduce an explicit preference optimization framework termed EXPO that requires no analogous reparameterization to achieve an implicit reward. Quite differently, we merely posit intuitively appealing regularization factors from scratch that transparently avoid the potential pitfalls of key DPO variants, provably satisfying regularization desiderata that prior methods do not. Empirical results serve to corroborate our analyses and showcase the efficacy of EXPO.
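For context, the reparameterization the abstract refers to is the standard one from the DPO literature (not stated in this abstract itself): the policy is used to define an implicit reward relative to a fixed reference model, which collapses reward learning and policy optimization into one loss. A sketch of the standard formulation:

```latex
% Implicit reward induced by reparameterizing the policy \pi_\theta
% against a fixed reference model \pi_{\mathrm{ref}}:
r_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}

% Substituting this reward into the Bradley--Terry preference model
% yields the single DPO loss over preferred (y_w) and dispreferred
% (y_l) responses:
\mathcal{L}_{\mathrm{DPO}}(\theta) =
  -\,\mathbb{E}_{(x,\, y_w,\, y_l)}
  \left[ \log \sigma\!\left(
    \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
    - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
  \right) \right]
```

The paper's critique targets regularization artifacts baked into this reparameterized form; EXPO instead specifies its regularization factors explicitly rather than inheriting them from the implicit reward.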