This paper theoretically analyzes the impact of sampling strategies and iterative dynamics on the alignment of large language models using preference optimization frameworks like Identity Preference Optimization and Direct Preference Optimization. It demonstrates that instance-dependent sampling improves ranking guarantees, while skewed on-policy sampling can lead to excessive concentration. Furthermore, the paper proves that iterative alignment, where the learned policy influences future sampling, can result in instability, oscillations, or entropy collapse under specific conditions, and it identifies stable regimes.
LLM alignment can be destabilized by iterative training loops using model-generated preferences, leading to oscillations or entropy collapse under certain conditions.
Standard methods for aligning large language models with human preferences learn from pairwise comparisons among sampled candidate responses and regularize toward a reference policy. Despite their effectiveness, the effects of sampling and reference choices are poorly understood theoretically. We investigate these effects through Identity Preference Optimization, a widely used preference-alignment framework, and show that suitable instance-dependent sampling can yield stronger ranking guarantees, while skewed on-policy sampling can induce excessive concentration under structured preferences. We then analyze iterative alignment dynamics in which the learned policy feeds back into future sampling and reference policies, reflecting the common practice of training on model-generated preference data. We prove that these dynamics can exhibit persistent oscillations or entropy collapse for certain parameter choices, and we characterize regimes that guarantee stability. Our theoretical insights extend to Direct Preference Optimization, indicating that these phenomena are common to a broader class of preference-alignment methods. Experiments on real-world preference data validate our findings.
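To make the objects in the abstract concrete, here is a minimal toy sketch, not the paper's actual construction: the standard DPO sigmoid loss and IPO squared loss on reference-regularized log-ratio margins, plus a small iterative loop over three candidate responses in which pairs are sampled on-policy and the reference policy is optionally reset to the current policy each round. All names, rewards, and hyperparameters below are illustrative assumptions.

```python
import math
import random

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def entropy(p):
    return -sum(q * math.log(q) for q in p if q > 0)

def dpo_loss(logp_w, logp_l, logp_ref_w, logp_ref_l, beta=1.0):
    """DPO: -log sigmoid of the reference-regularized log-ratio margin."""
    margin = beta * ((logp_w - logp_ref_w) - (logp_l - logp_ref_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

def ipo_loss(logp_w, logp_l, logp_ref_w, logp_ref_l, tau=0.1):
    """IPO: squared loss pulling the log-ratio margin toward 1/(2*tau)."""
    margin = (logp_w - logp_ref_w) - (logp_l - logp_ref_l)
    return (margin - 1.0 / (2.0 * tau)) ** 2

def iterate_alignment(rounds=200, beta=1.0, lr=0.5, refresh_ref=True, seed=0):
    """Toy iterative DPO over 3 candidate responses (illustrative only).

    A fixed latent reward favors response 0; winner/loser pairs are
    sampled on-policy, and if refresh_ref is True the reference is
    reset to the current policy each round, so the learned policy
    feeds back into both sampling and the reference.
    """
    rng = random.Random(seed)
    logits = [0.0, 0.0, 0.0]
    ref = list(logits)
    reward = {0: 2.0, 1: 1.0, 2: 0.0}  # assumed latent rewards
    for _ in range(rounds):
        p = softmax(logits)
        # on-policy sampling of a candidate pair
        a = rng.choices(range(3), weights=p)[0]
        b = rng.choices(range(3), weights=p)[0]
        if a == b:
            continue
        # label the winner via a Bradley-Terry preference on rewards
        pw = 1.0 / (1.0 + math.exp(-(reward[a] - reward[b])))
        w, l = (a, b) if rng.random() < pw else (b, a)
        # gradient of the DPO loss w.r.t. softmax logits
        # (normalizers cancel, so log p_w - log p_l = logit_w - logit_l)
        margin = beta * ((logits[w] - logits[l]) - (ref[w] - ref[l]))
        g = 1.0 / (1.0 + math.exp(margin))  # sigmoid(-margin)
        logits[w] += lr * beta * g
        logits[l] -= lr * beta * g
        if refresh_ref:
            ref = list(logits)  # policy feeds back into the reference
    p = softmax(logits)
    return p, entropy(p)
```

With a fixed reference (`refresh_ref=False`) the margin grows as the policy separates the responses, the sigmoid factor shrinks, and updates saturate; with the reference refreshed every round the margin is reset to zero, so each sampled pair produces a constant-size push and the policy can keep concentrating, a crude analogue of the entropy-collapse regime the abstract describes.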