The paper introduces TrojanMerge, a novel framework that exploits model merging to create misaligned LLMs from initially safe models. This is achieved by embedding latent malicious components into source models via a constrained optimization problem, ensuring individual models remain safe while their merged versions exhibit high harmful response rates. Experiments across 9 LLMs show TrojanMerge consistently compromises safety alignment in merged models, even with diverse merging algorithms and hyperparameter configurations.
Merging seemingly safe LLMs can create dangerously misaligned models, thanks to a new "TrojanMerge" attack that exploits latent vulnerabilities.
Model merging has emerged as a powerful technique for combining specialized capabilities from multiple fine-tuned LLMs without additional training costs. However, the security implications of this widely adopted practice remain critically underexplored. In this work, we reveal that model merging introduces a novel attack surface that can be systematically exploited to compromise safety alignment. We present TrojanMerge, a framework that embeds latent malicious components into source models such that each model remains individually benign but produces severely misaligned models when merged. Our key insight is formulating this attack as a constrained optimization problem: we construct perturbations that preserve source model safety through directional consistency constraints, maintain capabilities via Frobenius directional alignment constraints, yet combine during merging to form pre-computed attack vectors. Extensive experiments across 9 LLMs from 3 model families demonstrate that TrojanMerge consistently achieves high harmful response rates in merged models while source models maintain safety scores comparable to unmodified versions. Our attack succeeds across diverse merging algorithms and remains effective under various hyperparameter configurations. These findings expose fundamental vulnerabilities in current model merging practices and highlight the urgent need for security-aware mechanisms.
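The core mechanism can be illustrated with a toy sketch (this is an illustrative construction, not the paper's actual constrained optimization): task-vector-style merging sums per-model weight deltas, so two perturbations that are each nearly orthogonal to a harmful "attack direction" can reconstruct that direction exactly when added together. All variable names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 512

# Hypothetical pre-computed attack vector (unit norm).
attack = rng.standard_normal(dim)
attack /= np.linalg.norm(attack)

# Large masking component orthogonal to the attack direction.
noise = rng.standard_normal(dim)
noise -= (noise @ attack) * attack      # project out the attack direction
noise *= 10.0 / np.linalg.norm(noise)   # make it dominate each delta

# Perturbations embedded in two source models: the noise cancels on merging.
delta_a = 0.5 * attack + noise
delta_b = 0.5 * attack - noise

def cos(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cos(delta_a, attack))             # near 0: model A's delta looks benign
print(cos(delta_b, attack))             # near 0: model B's delta looks benign
print(cos(delta_a + delta_b, attack))   # 1.0: merged sum is exactly the attack vector
```

Each source model's perturbation has only a small cosine similarity with the attack direction (loosely analogous to the paper's directional consistency constraint), yet summing the deltas, as averaging-style merging effectively does, recovers the attack vector exactly.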