The paper investigates Specialized Pretraining (SPT), a strategy in which a small domain-specific dataset is repeatedly mixed into pretraining, and compares it to standard pretraining followed by finetuning. SPT improves domain performance, better preserves general capabilities after finetuning, and reduces the pretraining tokens needed to reach a given domain performance by up to 1.75x; the gains are largest when the target domain is underrepresented in the pretraining corpus. The authors also derive overfitting scaling laws to guide the choice of domain-data repetition for a given pretraining compute budget.
Stop wasting your finetuning data: Specialized Pretraining (SPT) can outperform standard pretraining followed by finetuning, achieving better domain performance with fewer parameters and less compute.
Real-world model deployments demand strong performance on narrow domains where data is often scarce. Typically, practitioners finetune models to specialize them, but this risks overfitting to the domain and forgetting general knowledge. We study a simple strategy, specialized pretraining (SPT), where a small domain dataset, typically reserved for finetuning, is repeated starting from pretraining as a fraction of the total tokens. Across three specialized domains (ChemPile, MusicPile, and ProofPile), SPT improves domain performance and preserves general capabilities after finetuning compared to standard pretraining. In our experiments, SPT reduces the pretraining tokens needed to reach a given domain performance by up to 1.75x. These gains grow when the target domain is underrepresented in the pretraining corpus: on domains far from web text, a 1B SPT model outperforms a 3B standard pretrained model. Beyond these empirical gains, we derive overfitting scaling laws to guide practitioners in selecting the optimal domain-data repetition for a given pretraining compute budget. Our observations reveal the finetuner's fallacy: while finetuning may appear to be the cheapest path to domain adaptation, introducing specialized domain data during pretraining stretches its utility. SPT yields better specialized domain performance (via reduced overfitting across repeated exposures) and better general domain performance (via reduced forgetting during finetuning), ultimately achieving stronger results with fewer parameters and less total compute when amortized over inference. To get the most out of domain data, incorporate it as early in training as possible.
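To make the SPT recipe concrete, below is a minimal sketch of how a small domain dataset might be mixed into a pretraining data stream at a fixed fraction of the total tokens, with the domain set cycled (i.e., repeated) whenever it is exhausted. This is an illustrative assumption of the setup, not the authors' code: the function name `spt_token_stream` and the parameters `domain_fraction` and `total_tokens` are hypothetical, and in the paper the repetition level would be chosen via the derived overfitting scaling laws rather than set by hand.

```python
import random

def spt_token_stream(general_docs, domain_docs, domain_fraction=0.05,
                     total_tokens=10_000_000, seed=0):
    """Yield tokenized documents for pretraining so that roughly
    `domain_fraction` of the emitted tokens come from the small domain
    dataset. The domain set is cycled (repeated) when exhausted, which is
    the key difference from reserving it exclusively for finetuning.

    `general_docs` and `domain_docs` are lists of token-id lists.
    (Hypothetical sketch; names and defaults are illustrative.)
    """
    rng = random.Random(seed)
    emitted = 0
    gen_idx, dom_idx = 0, 0
    while emitted < total_tokens:
        if rng.random() < domain_fraction:
            # Sample from the domain data, repeating it across epochs.
            doc = domain_docs[dom_idx % len(domain_docs)]
            dom_idx += 1
        else:
            doc = general_docs[gen_idx % len(general_docs)]
            gen_idx += 1
        emitted += len(doc)
        yield doc
```

In this sketch, `domain_fraction` plays the role of the domain-data repetition knob: for a fixed pretraining token budget, a larger fraction means more repeated passes over the small domain dataset, which is exactly the trade-off the paper's overfitting scaling laws are meant to guide.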