This paper introduces GRAPE, a global pruning strategy for sparse Mixture-of-Experts (MoEs) that dynamically allocates pruning budgets across layers based on redundancy. GRAPE addresses the limitations of uniform pruning by exploiting the heterogeneous redundancy present in MoEs, leading to more efficient parameter reduction. Experiments across Mixtral, DeepSeek-MoE, Qwen-MoE, and GPT-OSS models demonstrate that GRAPE consistently outperforms local pruning baselines, achieving up to 2.45% accuracy gains under the same pruning budget.
MoEs can be pruned more effectively by considering cross-layer redundancy, leading to significant performance gains compared to uniform pruning strategies.
Empirical scaling laws for language models have encouraged the development of ever-larger LLMs, despite their growing computational and memory costs. Sparse Mixture-of-Experts (MoEs) offer a promising alternative by activating only a subset of experts per forward pass, improving efficiency without sacrificing performance. However, the large number of expert parameters still leads to substantial memory consumption. Existing pruning methods typically allocate budgets uniformly across layers, overlooking the heterogeneous redundancy that arises in sparse MoEs. We propose GRAPE (Global Redundancy-Aware Pruning of Experts), a global pruning strategy that dynamically allocates pruning budgets based on cross-layer redundancy. Experiments on Mixtral-8x7B, Mixtral-8x22B, DeepSeek-MoE, Qwen-MoE, and GPT-OSS show that, under the same pruning budget, GRAPE consistently achieves the best average performance. On the three main models reported in the paper, it improves average accuracy over the strongest local baseline by 1.40% on average across pruning settings, with gains of up to 2.45%.
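To make the difference between uniform (local) and global budget allocation concrete, the following is a minimal sketch, not the authors' code: it contrasts pruning the same number of experts per layer with ranking all experts globally by a redundancy score. The per-expert redundancy scores and the global ranking rule are illustrative assumptions; the abstract does not specify how GRAPE actually scores experts.

```python
# Sketch only: uniform per-layer pruning vs. a global, redundancy-aware
# allocation under the same total budget. Scores are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
num_layers, experts_per_layer = 4, 8
total_to_prune = 8  # identical overall budget for both schemes

# Hypothetical redundancy score per expert (higher = more redundant).
redundancy = rng.random((num_layers, experts_per_layer))

# Local/uniform baseline: prune the same number of experts in every layer.
per_layer = total_to_prune // num_layers
uniform_pruned = {
    layer: list(np.argsort(-redundancy[layer])[:per_layer])
    for layer in range(num_layers)
}

# Global allocation: rank all experts across layers and prune the top-k
# overall, so layers with more redundancy lose more experts.
flat_order = np.argsort(-redundancy, axis=None)[:total_to_prune]
global_pruned = {layer: [] for layer in range(num_layers)}
for idx in flat_order:
    layer, expert = divmod(int(idx), experts_per_layer)
    global_pruned[layer].append(expert)

print("uniform budget per layer:", {k: len(v) for k, v in uniform_pruned.items()})
print("global  budget per layer:", {k: len(v) for k, v in global_pruned.items()})
```

In this toy setup the global scheme typically assigns unequal per-layer budgets, which is the behavior the paper attributes to exploiting heterogeneous cross-layer redundancy.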