This paper introduces Cluster-aware Upcycling, a novel initialization strategy for Mixture-of-Experts (MoE) models that leverages semantic clustering of input activations from a pretrained dense model. By initializing each expert with subspace representations of its corresponding cluster and setting router weights to cluster centroids, the method breaks expert symmetry and promotes early specialization. Experiments on CLIP ViT models demonstrate that Cluster-aware Upcycling outperforms existing methods on zero-shot and few-shot benchmarks, while also improving expert diversity and routing confidence.
Kickstart MoE training by initializing experts with semantically meaningful subspaces, leading to faster specialization and better performance than standard upcycling techniques.
Sparse Upcycling provides an efficient way to initialize a Mixture-of-Experts (MoE) model from pretrained dense weights instead of training it from scratch. However, because all experts start from identical weights and the router is randomly initialized, the resulting model suffers from expert symmetry and limited early specialization. We propose Cluster-aware Upcycling, a strategy that incorporates semantic structure into MoE initialization. Our method first partitions the dense model's input activations into semantic clusters. Each expert is then initialized with a subspace representation of its corresponding cluster, obtained via truncated SVD, and the router's weights are initialized to the cluster centroids. This cluster-aware initialization breaks expert symmetry and encourages early specialization aligned with the data distribution. We further introduce an expert-ensemble self-distillation loss that stabilizes training by using an ensemble teacher to provide reliable routing guidance. Evaluated on CLIP ViT-B/32 and ViT-B/16, Cluster-aware Upcycling consistently outperforms existing methods on both zero-shot and few-shot benchmarks. It also produces more diverse and disentangled expert representations, reduces inter-expert similarity, and yields more confident routing behavior.
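To make the initialization step concrete, the following is a minimal, hypothetical PyTorch sketch, not the authors' code: it assumes k-means over activations collected from the dense model's FFN inputs, a per-cluster truncated SVD whose top singular vectors define the cluster subspace, the dense FFN weight projected onto that subspace as each expert's starting point, and router rows set to the cluster centroids. The function name, shapes, and the exact way the SVD subspace is applied to the dense weight are assumptions.

```python
import torch
from sklearn.cluster import KMeans

def cluster_aware_init(W_dense, X, n_experts, rank):
    """Hypothetical cluster-aware expert/router initialization (sketch, not the paper's code).

    W_dense: (d_ff, d_model) pretrained dense FFN weight.
    X:       (n_tokens, d_model) input activations collected from the dense model.
    rank:    number of singular vectors kept per cluster (truncated SVD).
    Returns expert weights (n_experts, d_ff, d_model) and router weights (n_experts, d_model).
    """
    # 1) Partition the activations into semantic clusters.
    km = KMeans(n_clusters=n_experts, n_init=10).fit(X.numpy())
    centroids = torch.tensor(km.cluster_centers_, dtype=X.dtype)  # (n_experts, d_model)
    labels = torch.tensor(km.labels_)

    experts = []
    for e in range(n_experts):
        X_e = X[labels == e]                              # activations assigned to cluster e
        # 2) Truncated SVD of the cluster's activations: the top-`rank` right singular
        #    vectors span the input directions this expert should specialize in.
        _, _, Vh = torch.linalg.svd(X_e, full_matrices=False)
        V_r = Vh[:rank]                                   # (rank, d_model)
        P = V_r.T @ V_r                                   # projector onto the cluster subspace
        # 3) Initialize the expert as the dense weight restricted to that subspace
        #    (one plausible reading of "subspace representation"; an assumption here).
        experts.append(W_dense @ P)
    expert_weights = torch.stack(experts)                 # (n_experts, d_ff, d_model)

    # 4) Router rows start at the cluster centroids, so each token is initially routed
    #    toward the expert whose cluster it lies closest to (by dot product).
    router_weights = centroids
    return expert_weights, router_weights
```

The expert-ensemble self-distillation loss can be sketched similarly. The version below assumes the teacher is the unweighted average of all experts' outputs on the same tokens and the student is the routed MoE output, with a temperature-scaled KL divergence between them; the abstract does not specify these details, so treat them as illustrative assumptions.

```python
import torch.nn.functional as F

def ensemble_self_distillation_loss(expert_logits, moe_logits, temperature=2.0):
    """Hypothetical expert-ensemble self-distillation loss (details are assumptions).

    expert_logits: (n_experts, batch, dim) outputs of each expert on the same tokens.
    moe_logits:    (batch, dim) routed MoE output (student).
    """
    teacher = expert_logits.mean(dim=0).detach() / temperature  # ensemble teacher, no gradient
    student = moe_logits / temperature
    return F.kl_div(F.log_softmax(student, dim=-1),
                    F.softmax(teacher, dim=-1),
                    reduction="batchmean") * temperature ** 2
```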