RACER, a novel training-free speculative decoding method, combines retrieval of exact context matches with logit-based future token cues to generate richer speculative drafts. By integrating these two approaches, RACER overcomes the limitations of purely retrieval-based or logit-based methods, leading to more accurate and efficient drafts. Experiments show RACER achieves over 2x speedup compared to autoregressive decoding and outperforms existing training-free speculative decoding methods on benchmarks like Spec-Bench and HumanEval.
LLM inference gets a 2x speed boost without training, thanks to a clever technique that merges retrieval with logit-based speculation.
Autoregressive decoding in Large Language Models (LLMs) generates one token per step, causing high inference latency. Speculative decoding (SD) mitigates this through a guess-and-verify strategy, but existing training-free variants face trade-offs: retrieval-based drafts break when no exact match exists, while logit-based drafts lack structural guidance. We propose $\textbf{RACER}$ ($\textbf{R}$etrieval-$\textbf{A}$ugmented $\textbf{C}$ont$\textbf{e}$xtual $\textbf{R}$apid Speculative Decoding), a lightweight and training-free method that integrates retrieved exact patterns with logit-driven future cues. This unification supplies both reliable anchors and flexible extrapolation, yielding richer speculative drafts. Experiments on Spec-Bench, HumanEval, and MGSM-ZH demonstrate that RACER consistently accelerates inference, achieving more than $2\times$ speedup over autoregressive decoding, and outperforms prior training-free methods, offering a scalable, plug-and-play solution for efficient LLM decoding. Our source code is available at $\href{https://github.com/hkr04/RACER}{https://github.com/hkr04/RACER}$.
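To make the guess-and-verify idea concrete, the following is a minimal toy sketch of one speculative step that drafts from an exact n-gram match in the context and falls back to greedy logit-style extrapolation when no match exists. Everything here is an illustrative assumption: the bigram table stands in for a real model's logits, and the function names (`retrieve_draft`, `logit_draft`, `speculative_step`) are not RACER's actual API.

```python
# Toy stand-in for one (expensive) target-model forward pass: a greedy
# bigram lookup plays the role of argmax over the model's logits.
BIGRAM = {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}

def target_next(tokens):
    """Greedy next token from the toy 'model' (hypothetical stand-in)."""
    return BIGRAM.get(tokens[-1], "<eos>")

def retrieve_draft(tokens, n=1, k=3):
    """Retrieval anchor: find the most recent earlier occurrence of the
    trailing n-gram and copy up to k tokens that followed it as a draft."""
    suffix = tokens[-n:]
    for i in range(len(tokens) - n - 1, -1, -1):
        if tokens[i:i + n] == suffix:
            return tokens[i + n:i + n + k]
    return []  # no exact match in context -> retrieval alone breaks down

def logit_draft(tokens, k=3):
    """Logit-style fallback: extrapolate k tokens greedily. (In a real
    system this would reuse cheap logit cues, not full forward passes.)"""
    draft, ctx = [], list(tokens)
    for _ in range(k):
        nxt = target_next(ctx)
        draft.append(nxt)
        ctx.append(nxt)
    return draft

def speculative_step(tokens, k=3):
    """One guess-and-verify step: draft k tokens, keep the longest prefix
    the target model agrees with, then emit one verified token on top,
    so each step always makes progress."""
    draft = retrieve_draft(tokens, k=k) or logit_draft(tokens, k=k)
    accepted = []
    for tok in draft:
        if tok == target_next(tokens + accepted):
            accepted.append(tok)
        else:
            break
    accepted.append(target_next(tokens + accepted))
    return accepted

# One step accepts several tokens at once instead of a single token:
print(speculative_step(["the", "cat", "sat", "on", "the"]))
# -> ['cat', 'sat', 'on', 'the']
```

In this toy run the trailing token "the" matches its earlier occurrence, so the retrieval anchor supplies the draft and verification accepts all of it, yielding four tokens from one step where plain autoregressive decoding would yield one.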