This paper investigates domain-adaptive pre-training (DAPT) for specializing small to mid-sized French LLMs in the biomedical domain, questioning its efficacy relative to prior work. The authors release a fully open-licensed French biomedical corpus and specialized models, evaluating them through causal language modeling and comparative benchmarks. Their results suggest that DAPT's effectiveness is limited, and they highlight the importance of model merging post-DAPT to mitigate generalization trade-offs, which in some cases even improves performance on specialized tasks.
Domain-adaptive pre-training might not be worth it for French biomedical LLMs, but model merging after DAPT can actually boost performance on specialized tasks.
Large language models (LLMs) have demonstrated remarkable capabilities across diverse domains, yet their adaptation to specialized fields remains challenging, particularly for non-English languages. This study investigates domain-adaptive pre-training (DAPT) as a strategy for specializing small to mid-sized LLMs in the French biomedical domain through continued pre-training. We address two key research questions: the viability of specialized continued pre-training for domain adaptation, and the relationship between domain-specific performance gains and general capability degradation. Our contributions include the release of a fully open-licensed French biomedical corpus suitable for commercial and open-source applications, the training and release of specialized French biomedical LLMs, and novel insights for DAPT implementation. Our methodology encompasses the collection and refinement of high-quality French biomedical texts, the exploration of causal language modeling approaches using DAPT, and extensive comparative evaluations. Our results cast doubt on the efficacy of DAPT, in contrast to previous work, but we highlight its viability in smaller-scale, resource-constrained scenarios under the right conditions. Our findings further suggest that model merging post-DAPT is essential to mitigate generalization trade-offs, and in some cases it even improves performance on the specialized tasks that DAPT targeted.
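
For readers unfamiliar with the setup, continued pre-training with a causal language modeling objective is straightforward to express with Hugging Face Transformers. The sketch below is illustrative rather than the paper's pipeline: the model identifier, corpus file, and hyperparameters are placeholder assumptions.

from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Placeholder base model and corpus path; the paper's checkpoints and corpus are not named here.
model_id = "some-french-base-llm"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # many causal LMs ship without a pad token

# Plain-text domain corpus, one document per line.
raw = load_dataset("text", data_files={"train": "french_biomedical_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

train_set = raw["train"].map(tokenize, batched=True, remove_columns=["text"])

# mlm=False produces next-token (causal LM) labels, i.e. the DAPT objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="dapt-checkpoints",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,
    learning_rate=2e-5,
    num_train_epochs=1,
)

Trainer(model=model, args=args, train_dataset=train_set, data_collator=collator).train()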
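
The abstract does not state which merging method was used; a common baseline is linear interpolation of weights between the general base model and its DAPT counterpart, sketched below under the assumption that the two checkpoints share the same architecture. The model names and the mixing weight alpha are hypothetical.

from transformers import AutoModelForCausalLM

# Hypothetical checkpoint names; both must have identical architectures and parameter names.
base = AutoModelForCausalLM.from_pretrained("some-french-base-llm")
dapt = AutoModelForCausalLM.from_pretrained("some-french-biomedical-dapt-llm")

alpha = 0.5  # fraction of the DAPT weights in the merge
dapt_state = dapt.state_dict()

# Parameter-wise linear interpolation: merged = (1 - alpha) * base + alpha * dapt.
merged_state = {
    name: (1.0 - alpha) * param + alpha * dapt_state[name]
    for name, param in base.state_dict().items()
}

base.load_state_dict(merged_state)
base.save_pretrained("merged-french-biomedical-llm")

Sweeping alpha against both general and domain benchmarks is a natural way to locate the trade-off point the paper describes.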