This paper introduces Hybrid Policy Distillation (HPD), a knowledge distillation method for LLMs that unifies forward and reverse KL divergence within a reweighted log-likelihood objective. HPD balances mode coverage and mode-seeking by integrating the strengths of both KL directions and combines off-policy data with approximate on-policy sampling. Experiments across math reasoning, dialogue, and code generation tasks show that HPD improves optimization stability, computational efficiency, and overall performance compared to existing KD methods.
Achieve better LLM knowledge distillation by blending the best of both forward and reverse KL divergence, leading to more stable training and improved performance.
Knowledge distillation (KD) is a powerful paradigm for compressing large language models (LLMs), whose effectiveness depends on intertwined choices of divergence direction, optimization strategy, and data regime. We break down the design of existing KD methods and present a unified view that establishes connections between them, reformulating KD as a reweighted log-likelihood objective at the token level. We further propose Hybrid Policy Distillation (HPD), which integrates the complementary advantages of forward and reverse KL to balance mode coverage and mode-seeking, and combines off-policy data with lightweight, approximate on-policy sampling. We validate HPD on long-generation math reasoning as well as short-generation dialogue and code tasks, demonstrating improved optimization stability, computational efficiency, and final performance across diverse model families and scales. The code related to this work is available at https://github.com/zwhong714/Hybrid-Policy-Distillation.
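To make the core idea concrete, here is a minimal sketch of blending forward KL (mode-covering) and reverse KL (mode-seeking) over a token distribution. This is an illustration, not the paper's exact objective: the function names, the fixed mixing weight `alpha`, and the tiny hand-written distributions are all assumptions; HPD's actual reweighted token-level formulation and on-policy sampling scheme are described in the paper and its repository.

```python
import math

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions given as probability lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def hybrid_kd_loss(teacher, student, alpha=0.5):
    """Illustrative hybrid loss: a convex blend of the two KL directions.

    Forward KL(teacher || student) encourages mode coverage; reverse
    KL(student || teacher) encourages mode seeking. `alpha` is a
    hypothetical mixing weight, not the paper's actual scheme.
    """
    forward = kl_divergence(teacher, student)
    reverse = kl_divergence(student, teacher)
    return alpha * forward + (1 - alpha) * reverse

# Toy next-token distributions over a 3-token vocabulary.
teacher = [0.7, 0.2, 0.1]
student = [0.5, 0.3, 0.2]
print(hybrid_kd_loss(teacher, student))
```

Setting `alpha=1.0` recovers pure forward-KL distillation and `alpha=0.0` pure reverse-KL distillation, which makes the trade-off between the two directions easy to probe on toy distributions.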