The paper introduces OptiMer, a method for continual pre-training (CPT) that optimizes the composition of multiple continually pre-trained models by searching for weights over their "distribution vectors" with Bayesian optimization. This decouples mixture ratio selection from the training process, allowing for post-hoc optimization. Experiments on Gemma 3 27B demonstrate that OptiMer outperforms data mixture and model averaging baselines while reducing the search cost by 15-35x.
Forget painstakingly tuning data mixture ratios for continual pre-training: OptiMer lets you train individual models and then *optimize* their combination weights *afterward*, cutting search costs by up to 35x.
Continual pre-training is widely used to adapt LLMs to target languages and domains, yet the mixture ratios of training data remain sensitive hyperparameters that are expensive to tune: they must be fixed before training begins, and a suboptimal choice can waste weeks of compute. In this work, we propose OptiMer, which decouples ratio selection from training: we train one CPT model per dataset, extract each model's distribution vector, which represents the parameter shift induced by that dataset, and search for optimal composition weights post-hoc via Bayesian optimization. Experiments on Gemma 3 27B across languages (Japanese, Chinese) and domains (Math, Code) show that OptiMer consistently outperforms data mixture and model averaging baselines at 15-35 times lower search cost. Key findings reveal that 1) the optimized weights can be interpreted as data mixture ratios, and retraining with these ratios improves data mixture CPT, and 2) the same vector pool can be re-optimized for a given objective without any retraining, producing target-tailored models on demand. Our work establishes that data mixture ratio selection, traditionally a pre-training decision, can be reformulated as a post-hoc optimization over distribution vectors, offering a more flexible paradigm for continual pre-training.
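The sketch below illustrates the post-hoc search the abstract describes, under stated assumptions: it is not the authors' implementation. Names such as `base_state`, `cpt_states`, and `validation_loss` are hypothetical placeholders, and scikit-optimize's `gp_minimize` stands in for whatever Bayesian optimizer the paper uses. Distribution vectors are taken to be the per-dataset parameter shifts, and the merged model is the base plus a weighted sum of those vectors.

```python
# Hedged sketch of an OptiMer-style post-hoc weight search.
# Assumptions: state dicts of torch tensors, weights in [0, 1],
# and a user-supplied validation_loss(state_dict) -> float objective.
from skopt import gp_minimize
from skopt.space import Real


def distribution_vectors(base_state, cpt_states):
    """One vector per CPT model: the parameter shift induced by its dataset."""
    return [
        {k: cpt[k] - base_state[k] for k in base_state}
        for cpt in cpt_states
    ]


def compose(base_state, vectors, weights):
    """Merge: base parameters plus the weighted sum of distribution vectors."""
    merged = {k: v.clone() for k, v in base_state.items()}
    for w, vec in zip(weights, vectors):
        for k in merged:
            merged[k] += w * vec[k]
    return merged


def search_weights(base_state, cpt_states, validation_loss, n_calls=30):
    """Bayesian-optimize composition weights; no retraining per candidate."""
    vectors = distribution_vectors(base_state, cpt_states)
    space = [Real(0.0, 1.0, name=f"w{i}") for i in range(len(vectors))]

    def objective(weights):
        # Loss of the merged model on the target objective (lower is better),
        # e.g. perplexity on a target-language or target-domain dev set.
        return validation_loss(compose(base_state, vectors, weights))

    result = gp_minimize(objective, space, n_calls=n_calls)
    return result.x, compose(base_state, vectors, result.x)
```

Because each candidate is evaluated by merging existing checkpoints rather than retraining, the same pool of distribution vectors can be re-searched for a new `validation_loss` at negligible cost, which is the re-optimization property the abstract highlights.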