This paper investigates catastrophic forgetting in LLMs when adapting them to the medical domain, a critical challenge for clinical applications. The authors propose a weight-space model merging framework that interpolates a clinical foundation model (GatorTronLlama) with a general instruction-following model (Llama-3.1-8B-Instruct). Results show that the merged models effectively mitigate catastrophic forgetting, preserve clinical domain expertise, and retain instruction-following ability, achieving performance comparable to fully fine-tuned baselines with significantly less data.
Forget fine-tuning: merging models in weight space lets you adapt LLMs to new domains without sacrificing instruction-following ability, even with limited data.
Large language models have been adopted in the medical domain for clinical documentation to reduce clinician burden. However, studies have reported that LLMs often "forget" a significant amount of instruction-following ability when fine-tuned on a task-specific medical dataset, a critical challenge in adopting general-purpose LLMs for clinical applications. This study presents a model merging framework to efficiently adapt general-purpose LLMs to the medical domain by countering this forgetting issue. By merging a clinical foundation model (GatorTronLlama) with a general instruct model (Llama-3.1-8B-Instruct) via interpolation-based merge methods, we seek to derive a domain-adapted model with strong performance on clinical tasks while retaining instruction-following ability. Comprehensive evaluation across medical benchmarks and five clinical generation tasks (e.g., radiology and discharge summarization) shows that merged models can effectively mitigate catastrophic forgetting, preserve clinical domain expertise, and retain instruction-following ability. In addition, our model merging strategies demonstrate training efficiency, achieving performance on par with fully fine-tuned baselines under severely constrained supervision (e.g., 64-shot vs. 256-shot). Consequently, weight-space merging constitutes a highly scalable solution for adapting open-source LLMs to clinical applications, facilitating broader deployment in resource-constrained healthcare environments.
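The core idea of interpolation-based weight-space merging can be sketched as a linear blend of matched parameters from the two models. The sketch below is illustrative only: the function name `merge_weights`, the mixing coefficient `alpha`, and the toy scalar "parameters" are assumptions for demonstration, not details taken from the paper (which operates on full tensor checkpoints of the named LLMs).

```python
def merge_weights(clinical, general, alpha=0.5):
    """Linearly interpolate two models' parameters (weight-space merging).

    `alpha` weights the clinical model and `1 - alpha` the general
    instruct model. Both inputs map parameter names to weight values;
    the models must share an architecture so names line up.
    """
    assert clinical.keys() == general.keys(), "models must share an architecture"
    return {name: alpha * clinical[name] + (1 - alpha) * general[name]
            for name in clinical}

# Toy example with scalar "parameters"; real checkpoints hold tensors,
# but the interpolation is applied elementwise in exactly the same way.
clinical = {"layer.weight": 1.0, "layer.bias": 0.5}
general = {"layer.weight": 0.0, "layer.bias": 1.5}
merged = merge_weights(clinical, general, alpha=0.5)
# merged["layer.weight"] -> 0.5, merged["layer.bias"] -> 1.0
```

In practice, `alpha` trades off clinical expertise against instruction-following ability, which is why interpolation-based merges can recover capabilities that task-specific fine-tuning erodes.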