This paper investigates the impact of draft model training data on the performance of speculative decoding. The authors train lightweight HASS and EAGLE-2 draft models on task-specific datasets (MathInstruct, ShareGPT) and evaluate them on MT-Bench, GSM8K, MATH-500, and SVAMP. Results show that task-specific training significantly improves acceptance length, and that confidence-based routing and merged-tree verification are effective strategies for combining specialized drafters at inference time.
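The confidence-based routing idea can be sketched in a few lines: given several specialized drafters, pick the one whose next-token distribution is most confident (highest top-1 probability). This is a minimal illustration, not the paper's implementation; the drafter names, the `route_by_confidence` function, and its signature are all hypothetical.

```python
def route_by_confidence(drafter_probs):
    """Pick the drafter whose next-token distribution has the highest
    top-1 probability (its 'confidence'). `drafter_probs` maps a
    drafter name to a probability distribution over the vocabulary.
    Purely illustrative: the paper's actual routing rule may differ."""
    return max(drafter_probs, key=lambda name: max(drafter_probs[name]))

# Toy distributions over a 3-token vocabulary (hypothetical values):
probs = {
    "math_drafter": [0.70, 0.20, 0.10],  # confident: top-1 prob 0.70
    "chat_drafter": [0.40, 0.35, 0.25],  # less confident: top-1 prob 0.40
}
print(route_by_confidence(probs))  # math_drafter
```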
Forget generic pre-training: Speculative decoding gets a serious speed boost when your draft model is a specialist trained on data matching the target task.
Speculative decoding accelerates autoregressive generation by letting a lightweight draft model propose future tokens that a larger target model then verifies in parallel. In practice, however, draft models are usually trained on broad generic corpora, which leaves it unclear how much speculative decoding quality depends on the draft training distribution. We study this question with lightweight HASS and EAGLE-2 drafters trained on MathInstruct, ShareGPT, and mixed-data variants, evaluated on MT-Bench, GSM8K, MATH-500, and SVAMP. Measured by acceptance length, task-specific training yields clear specialization: MathInstruct-trained drafts are strongest on reasoning benchmarks, while ShareGPT-trained drafts are strongest on MT-Bench. Mixed-data training improves robustness, but larger mixtures do not dominate across decoding temperatures. We also study how to combine specialized drafters at inference time. Naive checkpoint averaging performs poorly, whereas confidence-based routing improves over single-domain drafts and merged-tree verification yields the highest acceptance length overall for both backbones. Finally, confidence is a more useful routing signal than entropy: rejected tokens tend to have higher entropy, but confidence produces much clearer benchmark-level routing decisions. These results show that speculative decoding quality depends not only on draft architecture, but also on the match between draft training data and downstream workload, and that specialized drafters are better combined at inference time than in weight space.
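The acceptance-length metric used throughout the abstract can be illustrated with a toy greedy-verification loop: the target model accepts draft tokens from the left for as long as each one matches its own argmax prediction, and the count of accepted tokens is the acceptance length. This is a simplified sketch of greedy verification only; tree-based verification as in EAGLE-2 and the `acceptance_length` helper name are not from the paper.

```python
def acceptance_length(draft_tokens, target_argmax):
    """Count how many leading draft tokens the target accepts under
    greedy verification: accept while the draft token matches the
    target's argmax at that position; the first mismatch ends the
    speculation round (the target's own token is emitted instead)."""
    n = 0
    for d, t in zip(draft_tokens, target_argmax):
        if d != t:
            break
        n += 1
    return n

# Example: the draft proposes 5 tokens; the target agrees on the first 3,
# so this round verifies 3 draft tokens in a single target forward pass.
draft = [12, 7, 99, 4, 4]
target = [12, 7, 99, 8, 4]
print(acceptance_length(draft, target))  # 3
```

A higher average acceptance length means fewer target-model forward passes per generated token, which is why the paper uses it as the headline measure of drafter quality.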