The paper introduces AlignDistil, a token-level language model alignment method that optimizes each token in a response with a reward learned by DPO. AlignDistil reformulates RLHF as a token-level distillation process whose teacher distribution is derived from a combination of a DPO model and a reference model. By incorporating a contrastive DPO reward and a token-adaptive logit extrapolation mechanism, AlignDistil achieves superior performance and faster convergence compared to existing response-level alignment methods.
Token-level alignment, powered by a novel distillation approach, lets LLMs learn faster and better by avoiding the pitfalls of response-level reward optimization.
In modern large language models (LLMs), alignment is of crucial importance and is typically achieved through methods such as reinforcement learning from human feedback (RLHF) and direct preference optimization (DPO). However, in most existing methods for LLM alignment, all tokens in the response are optimized using a sparse, response-level reward or preference annotation. Ignoring token-level rewards may erroneously punish high-quality tokens or encourage low-quality ones, resulting in suboptimal performance and slow convergence. To address this issue, we propose AlignDistil, an RLHF-equivalent distillation method for token-level reward optimization. Specifically, we introduce the reward learned by DPO into the RLHF objective and theoretically prove the equivalence between this objective and a token-level distillation process, where the teacher distribution linearly combines the logits from the DPO model and a reference model. On this basis, we further bridge the accuracy gap between the reward from the DPO model and that from a pure reward model by building a contrastive DPO reward with a normal and a reverse DPO model. Moreover, to avoid under- and over-optimization on different tokens, we design a token-adaptive logit extrapolation mechanism to construct an appropriate teacher distribution for each token. Experimental results demonstrate the superiority of AlignDistil over existing methods and showcase its fast convergence due to token-level distributional reward optimization.
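To make the teacher construction concrete, here is a minimal sketch of the core idea at a single token position: the teacher's logits extrapolate the DPO model's logits away from the reference model's logits by a factor `alpha` (in the paper this factor is token-adaptive; here it is a fixed hyperparameter for illustration), and the student is trained by a KL distillation loss toward that teacher. All function names and the exact combination rule below are illustrative assumptions, not the paper's verbatim formulation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the vocabulary dimension.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def teacher_distribution(z_dpo, z_ref, alpha):
    # Teacher logits linearly combine DPO and reference logits:
    # push the DPO logits further away from the reference logits.
    # alpha is assumed fixed here; AlignDistil adapts it per token.
    z_teacher = z_dpo + alpha * (z_dpo - z_ref)
    return softmax(z_teacher)

def token_distill_loss(z_student, z_dpo, z_ref, alpha, eps=1e-12):
    # Forward KL(teacher || student) at one token position.
    p_t = teacher_distribution(z_dpo, z_ref, alpha)
    log_p_s = np.log(softmax(z_student) + eps)
    log_p_t = np.log(p_t + eps)
    return float((p_t * (log_p_t - log_p_s)).sum())
```

For example, with a 5-word vocabulary, `token_distill_loss(z, z_dpo, z_ref, alpha=1.0)` returns a non-negative scalar that is (near) zero exactly when the student's distribution matches the extrapolated teacher, which is what drives the dense, per-token supervision signal.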