This paper introduces ToolSpec, a novel speculative decoding method tailored for accelerating tool calling in LLMs by leveraging the structured nature of tool interactions. ToolSpec uses predefined tool schemas to guide draft generation via a finite-state machine and retrieves similar historical tool invocations for reuse. Experiments show ToolSpec achieves up to 4.2x speedup compared to existing training-free speculative decoding methods across multiple benchmarks.
Tool calling gets up to a 4.2x speed boost without training by exploiting structured schemas and retrieval of past invocations.
Tool calling has greatly expanded the practical utility of large language models (LLMs) by enabling them to interact with external applications. As LLM capabilities advance, effective tool use increasingly involves multi-step, multi-turn interactions to solve complex tasks. However, the resulting growth in tool interactions incurs substantial latency, posing a key challenge for real-time LLM serving. Through empirical analysis, we find that tool-calling traces are highly structured, conform to constrained schemas, and often exhibit recurring invocation patterns. Motivated by this, we propose ToolSpec, a schema-aware, retrieval-augmented speculative decoding method for accelerating tool calling. ToolSpec exploits predefined tool schemas to generate accurate drafts, using a finite-state machine to alternate between deterministic schema token filling and speculative generation for variable fields. In addition, ToolSpec retrieves similar historical tool invocations and reuses them as drafts to further improve efficiency. ToolSpec presents a plug-and-play solution that can be seamlessly integrated into existing LLM workflows. Experiments across multiple benchmarks demonstrate that ToolSpec achieves up to a 4.2x speedup, substantially outperforming existing training-free speculative decoding methods.
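The schema-guided drafting idea can be illustrated with a minimal sketch: a tool schema is compiled into alternating segments, where fixed schema tokens are emitted deterministically by a finite-state machine and variable fields are filled either from a retrieved historical invocation or by a speculative draft model. All function names, the schema format, and the `history` cache below are illustrative assumptions, not the paper's actual implementation.

```python
import json

FIXED, VAR = "fixed", "var"

def schema_to_fsm(schema):
    """Compile a tool schema into alternating fixed/variable segments.
    Fixed segments are emitted deterministically; variable segments
    are left to speculative drafting. (Toy sketch, not ToolSpec's API.)"""
    segments = [(FIXED, '{"name": "%s", "arguments": {' % schema["name"])]
    for i, param in enumerate(schema["parameters"]):
        sep = ", " if i else ""
        segments.append((FIXED, '%s"%s": ' % (sep, param)))
        segments.append((VAR, param))  # slot for the draft model to fill
    segments.append((FIXED, "}}"))
    return segments

def draft_tool_call(schema, guess_fn, history=None):
    """Assemble a draft tool call: fixed schema tokens are copied
    verbatim, while variable fields reuse a similar historical
    invocation when one is retrieved, falling back to the draft
    model (guess_fn) otherwise. The target model would then verify
    this draft in parallel, as in standard speculative decoding."""
    out = []
    for kind, text in schema_to_fsm(schema):
        if kind == FIXED:
            out.append(text)
        elif history and text in history:
            out.append(history[text])  # reuse past invocation value
        else:
            out.append(guess_fn(text))  # speculative draft
    return "".join(out)
```

A hypothetical usage example: with a `get_weather` tool and a cached past call supplying the `city` argument, only the remaining field needs speculative drafting.

```python
schema = {"name": "get_weather", "parameters": {"city": "string", "unit": "string"}}
history = {"city": '"Paris"'}
draft = draft_tool_call(schema, lambda p: '"celsius"', history)
# draft is valid JSON matching the schema, with "city" reused from history
```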