The paper introduces Diffusion-LPO, a novel framework for aligning text-to-image diffusion models with human preferences using listwise preference data. It extends the Direct Preference Optimization (DPO) objective under the Plackett-Luce model to handle ranked lists of images derived from user feedback, capturing human preferences more precisely than pairwise comparisons can. Experiments across text-to-image generation, image editing, and personalized preference alignment demonstrate that Diffusion-LPO consistently outperforms pairwise DPO baselines in visual quality and preference alignment.
Listwise preference optimization for diffusion models (Diffusion-LPO) beats pairwise DPO baselines, finally unlocking the potential of richer ranked human feedback.
Reinforcement learning from human feedback (RLHF) has proven effective for aligning text-to-image (T2I) diffusion models with human preferences. Although Direct Preference Optimization (DPO) is widely adopted for its computational efficiency and avoidance of explicit reward modeling, its application to diffusion models has primarily relied on pairwise preferences; the precise optimization of listwise preferences remains largely unaddressed. In practice, human feedback on image preferences often contains implicit ranking information, which conveys more precise preferences than pairwise comparisons. In this work, we propose Diffusion-LPO, a simple and effective framework for Listwise Preference Optimization in diffusion models. Given a caption, we aggregate user feedback into a ranked list of images and derive a listwise extension of the DPO objective under the Plackett-Luce model. Diffusion-LPO enforces consistency across the entire ranking by encouraging each sample to be preferred over all of its lower-ranked alternatives. We empirically demonstrate the effectiveness of Diffusion-LPO across various tasks, including text-to-image generation, image editing, and personalized preference alignment. Diffusion-LPO consistently outperforms pairwise DPO baselines on visual quality and preference alignment.
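The abstract does not give the objective explicitly, but a standard Plackett-Luce extension of the DPO loss can be sketched as follows: score each image by the (scaled) log-probability margin between policy and reference, then sum, over every rank position, the negative log-softmax of that item's score against all lower-ranked alternatives. This is a minimal NumPy sketch under that assumption; the function name, `beta`, and the per-list averaging are illustrative choices, not the paper's implementation.

```python
import numpy as np

def listwise_pl_dpo_loss(policy_logps, ref_logps, beta=0.1):
    """Hypothetical Plackett-Luce listwise DPO loss (sketch, not the paper's code).

    policy_logps, ref_logps: per-image log-probabilities, ordered best-first
    according to the human ranking. Returns the mean negative log-likelihood
    of the ranking under the Plackett-Luce model on DPO-style score margins.
    """
    # DPO-style implicit reward: scaled policy/reference log-prob margin.
    s = beta * (np.asarray(policy_logps, dtype=float)
                - np.asarray(ref_logps, dtype=float))
    K = len(s)
    loss = 0.0
    for k in range(K):
        # Item at rank k must "win" against itself and all lower-ranked items:
        # negative log-softmax of s[k] over the suffix s[k:].
        suffix = s[k:]
        loss -= suffix[0] - np.log(np.sum(np.exp(suffix - suffix.max()))) - suffix.max()
    return loss / K
```

With a two-image list this reduces to the familiar pairwise DPO logistic loss, which matches the abstract's framing of pairwise DPO as the special case of a ranked list of length two.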