The paper introduces a method for transferring fine-tuning updates between different versions of large language models by extracting and applying the "diff vector" representing weight changes from fine-tuning one model to another. This approach addresses the inefficiency of retraining models from scratch with each new base model release, especially for domain-specific or multilingual tasks. Experiments demonstrate significant performance improvements on tasks like IFEval, LiveCodeBench, and Global MMLU by transferring fine-tuning updates, even surpassing the performance of the target model's instruction-tuned version without additional training.
Forget retraining from scratch: porting fine-tuning updates between LLM versions yields a performance boost of up to 46.9% on tasks like instruction following, even surpassing fully fine-tuned models.
Modern LLMs struggle with efficient updates, as each new pretrained model version requires repeating expensive alignment processes. This challenge also applies to domain- or language-specific models, where fine-tuning on specialized data must be redone for every new base model release. In this paper, we explore the transfer of fine-tuning updates between model versions. Specifically, we derive the diff vector (representing the weight changes from fine-tuning) from one source model version and apply it to the base model of a different target version. Through empirical evaluations on various open-weight model versions, we show that transferring diff vectors can significantly improve the performance of the target base model. For example, transferring the fine-tuning updates from Llama 3.0 8B improves Llama 3.1 8B by 46.9% on IFEval and 15.7% on LiveCodeBench without additional training, even surpassing Llama 3.1 8B Instruct. Furthermore, we demonstrate performance gains on multilingual tasks, with 4.7% and 15.5% improvements on Global MMLU for Malagasy and Turkish, respectively. We observe that these merged models provide stronger initializations for further fine-tuning. Lastly, our controlled experiments suggest that fine-tuning transfer is most effective when source and target models lie in a linearly connected region of parameter space, and we provide a theoretical analysis of our method. Taken together, fine-tuning transfer offers a cost-efficient and practical strategy for continuous LLM development. Our code is available at github.com/pjlintw/finetuning-transfer.
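The core operation described above (subtract the source base weights from the source fine-tuned weights, then add the resulting diff vector to the target base) can be sketched as follows. This is a minimal illustration with toy scalar "weights", not the paper's implementation; real models would operate on tensor state dicts (e.g. PyTorch `state_dict`s), and the function names here are hypothetical:

```python
# Minimal sketch of fine-tuning transfer via diff vectors.
# Weights are modeled as name -> float dicts for illustration only;
# with real LLMs each value would be a weight tensor.

def diff_vector(finetuned, base):
    """Weight changes introduced by fine-tuning: finetuned minus base."""
    return {name: finetuned[name] - base[name] for name in base}

def apply_diff(target_base, diff):
    """Add the source model's fine-tuning updates to a new base model."""
    return {name: target_base[name] + diff[name] for name in target_base}

# Toy two-parameter "models" standing in for, e.g., Llama 3.0 base,
# Llama 3.0 Instruct, and the newer Llama 3.1 base.
source_base = {"w": 1.0, "b": 0.5}
source_finetuned = {"w": 1.3, "b": 0.2}
target_base = {"w": 1.1, "b": 0.6}

diff = diff_vector(source_finetuned, source_base)
merged = apply_diff(target_base, diff)
print({k: round(v, 6) for k, v in merged.items()})
```

As the paper notes, this simple addition is only expected to work well when the source and target model versions lie in a linearly connected region of parameter space; otherwise the merged weights may need further fine-tuning.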