This paper introduces a population-based evolutionary framework for adapting large language models (LLMs) to new tasks, drawing inspiration from natural evolution. The framework evolves a population of LLMs through crossover, mutation, selection, and succession operations, enabling rapid adaptation with limited data (200 samples per task) and without gradient-based optimization. Experiments across 12 datasets demonstrate that the evolutionary approach outperforms existing LLM merging and adaptation techniques, achieving accuracy improvements of up to 54.8% over the best model in the initial population.
Forget fine-tuning: this evolutionary approach adapts LLMs to new tasks with just 200 samples and no gradients, beating existing merging and adaptation methods and improving accuracy by up to 54.8% over the best model in the initial population.
Evolution, the engine behind the survival and growth of life on Earth, operates through the population-based process of reproduction. Inspired by this principle, this paper formally defines a newly emerging problem -- the population-based evolution of large language models (LLMs) -- and introduces a novel framework. Starting with a population of parent LLMs, our framework enables the population to evolve through four key operations: (i) crossover, merging the weights of different parents to create offspring LLMs, (ii) mutation, introducing small, random changes to model weights to foster diversity, (iii) selection, prioritizing high-performing models, and (iv) succession, transferring learned experience from parent to offspring LLMs. With only 200 samples per new task, the LLM population evolves rapidly to adapt to the task at hand, without any gradients. Experiments on 12 datasets show that our framework consistently outperforms existing multi-LLM merging and adaptation methods, achieving accuracy gains of up to 54.8% over the best LLM in the initial population. Moreover, our framework supports evolving LLMs across multiple new tasks simultaneously, scales effectively to populations of up to 40 LLMs, and even generalizes zero-shot to unseen held-out tasks. We have open-sourced the code on GitHub and released the weights of 10 parent LLMs, fine-tuned from gemma-2-2b-it, on HuggingFace, enabling reproduction of our proposed framework on just a single 4090 GPU with 24GB of memory, without any performance degradation.
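The four operations described in the abstract map naturally onto a gradient-free loop over model weights. Below is a minimal, hypothetical Python/PyTorch sketch of one generation, assuming each model is represented by its state dict and that an `evaluate` function scores a model on the roughly 200 task samples. The interpolation-based crossover, the Gaussian mutation scale, and the omission of the succession step are illustrative simplifications, not the paper's exact implementation.

```python
# Sketch of one evolution step, assuming models are PyTorch state dicts with
# identical keys and `evaluate(state_dict) -> float` scores a model on the
# small task sample set (higher is better). The succession operation
# (transferring parent experience to offspring) is omitted for brevity.
import copy
import random
import torch


def crossover(parent_a: dict, parent_b: dict, alpha: float = 0.5) -> dict:
    """Create an offspring by linearly interpolating two parents' weights."""
    return {name: alpha * parent_a[name] + (1.0 - alpha) * parent_b[name]
            for name in parent_a}


def mutate(weights: dict, sigma: float = 1e-3) -> dict:
    """Add small Gaussian perturbations to every tensor to foster diversity."""
    return {name: w + sigma * torch.randn_like(w) for name, w in weights.items()}


def evolve_one_generation(population: list, evaluate, num_offspring: int = 4,
                          population_size: int = 10) -> list:
    """Run crossover -> mutation -> selection on a population of state dicts."""
    offspring = []
    for _ in range(num_offspring):
        parent_a, parent_b = random.sample(population, 2)
        child = mutate(crossover(parent_a, parent_b, alpha=random.random()))
        offspring.append(child)

    # Selection: keep the highest-scoring models among parents and offspring.
    candidates = population + offspring
    candidates.sort(key=evaluate, reverse=True)
    return copy.deepcopy(candidates[:population_size])
```

Because every step operates only on stored weights and forward-pass evaluations, no backpropagation or optimizer state is needed, which is consistent with the abstract's claim that adaptation proceeds without gradients and fits on a single 24GB GPU for small models.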