This paper introduces Expert Threshold (ET) routing, a novel token routing mechanism for Mixture-of-Experts (MoE) language models in which each expert maintains an exponential moving average (EMA) threshold and admits tokens whose router scores exceed it. Unlike token-choice MoE (TC-MoE), ET routing allocates computation dynamically and balances expert load without auxiliary losses, making it particularly suitable for autoregressive language modeling. Pretraining experiments with a 2.4B-parameter model on FineWeb-Edu show that ET routing reduces cross-entropy loss by 0.067 relative to TC-MoE, equivalent to a 1.6x improvement in data efficiency.
Forget auxiliary losses and fixed expert capacity: Expert Threshold routing dynamically allocates computation in MoEs and balances expert load, all while boosting data efficiency by 1.6x.
Token-choice Mixture-of-Experts (TC-MoE) routes each token to a fixed number of experts, limiting dynamic computation allocation and requiring auxiliary losses to maintain load balance. We propose Expert Threshold (ET) routing, where each expert maintains an exponential moving average (EMA) threshold estimated from the global token distribution. At both training and inference, each token is independently routed to an expert if its score exceeds the expert's threshold, enabling dynamic computation allocation while achieving load balance without auxiliary losses. This fully causal mechanism eliminates dependence on other tokens in the batch, making it well-suited for autoregressive language modeling. In pretraining experiments scaling to 2.4B parameters on FineWeb-Edu, ET achieves 0.067 lower cross-entropy loss than TC-MoE, equivalent to reaching the same performance with 1.6× fewer tokens.
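The abstract does not specify how the per-expert threshold is estimated, so the sketch below is only an illustration of the routing idea, not the paper's implementation: per-token router scores, a per-expert EMA threshold updated during training from the observed score distribution, and an independent per-token dispatch decision. The `ExpertThresholdRouter` class name, the sigmoid scoring, the `target_rate` knob, and the quantile-based EMA update are all assumptions made for this example.

```python
import torch
import torch.nn as nn


class ExpertThresholdRouter(nn.Module):
    """Minimal sketch of Expert Threshold (ET) routing (illustrative, not the paper's code).

    Each expert keeps an EMA threshold; a token is dispatched to an expert
    whenever its router score exceeds that expert's threshold, independently
    of the other tokens in the batch.
    """

    def __init__(self, d_model: int, n_experts: int, target_rate: float = 0.25, ema_decay: float = 0.99):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.target_rate = target_rate  # assumed knob: desired fraction of tokens admitted per expert
        self.ema_decay = ema_decay
        # One EMA threshold per expert, stored as a buffer so it is saved with the model.
        self.register_buffer("threshold", torch.zeros(n_experts))

    def forward(self, x: torch.Tensor):
        # x: (batch, seq, d_model) -> per-token, per-expert scores in (0, 1).
        scores = torch.sigmoid(self.router(x))
        if self.training:
            # Assumed update rule: estimate the score quantile that would admit
            # `target_rate` of tokens for each expert, then fold it into the EMA.
            flat = scores.detach().reshape(-1, scores.size(-1))
            q = torch.quantile(flat, 1.0 - self.target_rate, dim=0)
            self.threshold.mul_(self.ema_decay).add_((1.0 - self.ema_decay) * q)
        # Routing decision is per token and does not depend on other tokens.
        dispatch_mask = scores > self.threshold  # (batch, seq, n_experts), bool
        return scores, dispatch_mask
```

At inference the EMA buffer is simply held fixed (the module is in `eval()` mode), so the dispatch decision for a token depends only on that token's own score and the stored thresholds, which is consistent with the batch-independence the abstract describes.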