The paper investigates the application of Direct Preference Optimization (DPO) to multimodal sequential recommendation under implicit feedback, addressing the challenge of unreliable negative samples. The authors find that replacing deterministic hard negatives with stochastic sampling from a dynamic top-K candidate pool significantly improves ranking performance. The proposed Robust DPO (RoDPO) method, combined with a sparse Mixture-of-Experts encoder, achieves up to a 5.25% NDCG@5 improvement on three Amazon benchmarks.
Stochastic negative sampling in Direct Preference Optimization (DPO) dramatically improves multimodal sequential recommendation, suggesting that carefully curated "wrong" answers are key to preference learning.
Preference-based alignment objectives have been widely adopted, from RLHF-style pairwise learning in large language models to emerging applications in recommender systems. Yet existing work rarely examines how Direct Preference Optimization (DPO) behaves under implicit feedback, where unobserved items are not reliable negatives. We conduct systematic experiments on multimodal sequential recommendation to compare common negative-selection strategies and their interaction with DPO training. Our central finding is that a simple modification, replacing deterministic hard negatives with stochastic sampling from a dynamic top-K candidate pool, consistently improves ranking performance. We attribute its effectiveness to two factors: (1) reducing erroneous suppressive gradients caused by false negatives, and (2) retaining informative hard signals while smoothing optimization via controlled stochasticity. With an optional sparse Mixture-of-Experts encoder for efficient capacity scaling, the resulting method, Robust DPO (RoDPO), achieves up to a 5.25% NDCG@5 improvement on three Amazon benchmarks with nearly unchanged inference cost.
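The core idea, stochastic sampling of negatives from the model's current top-K candidates paired with a standard DPO objective, can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function names, the uniform sampling over the pool, and the pairwise DPO form are assumptions for the sketch.

```python
import torch
import torch.nn.functional as F

def sample_negatives_from_topk(scores, positives, k=50, num_neg=1):
    """Stochastically draw negatives from the model's current top-K candidates,
    excluding the observed positive. A sketch of the dynamic-pool idea; the
    paper's exact pool construction and sampling distribution may differ.

    scores:    (batch, num_items) current model scores over the catalog
    positives: (batch,) observed (positive) item ids
    """
    topk = scores.topk(k, dim=-1).indices               # (batch, k) dynamic pool
    mask = topk != positives.unsqueeze(-1)              # drop the positive if it is in the pool
    probs = mask.float()
    probs = probs / probs.sum(-1, keepdim=True)         # uniform over remaining candidates
    idx = torch.multinomial(probs, num_neg)             # stochastic draw, no replacement
    return topk.gather(-1, idx)                         # (batch, num_neg) negative item ids

def dpo_loss(pos_logp, neg_logp, ref_pos_logp, ref_neg_logp, beta=0.1):
    """Standard pairwise DPO objective on (positive, sampled-negative) pairs,
    with log-probabilities from the policy and a frozen reference model."""
    logits = beta * ((pos_logp - ref_pos_logp) - (neg_logp - ref_neg_logp))
    return -F.logsigmoid(logits).mean()
```

Because negatives are resampled from the pool each step rather than fixed to the single hardest candidate, a false negative (an unobserved item the user would actually like) is only suppressed occasionally, which is the smoothing effect the abstract attributes the gains to.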