This paper introduces Prefill Token Equivalents (PTE), a hardware-aware metric for evaluating the efficiency of Tool-Integrated Reasoning (TIR) in LLMs, accounting for KV-cache eviction and long tool responses. The authors validate PTE against wall-clock latency in an industrial setting, demonstrating its superior alignment compared to standard token counts. Through extensive experiments on five TIR benchmarks, they identify four inefficiency patterns and show that higher PTE costs correlate with lower reasoning correctness.
Current LLM efficiency metrics fail to capture the true cost of tool use, as measured by wall-clock latency, but a new hardware-aware metric closes the gap.
In real-world Tool-Integrated Reasoning (TIR) scenarios, where LLMs interleave reasoning with external tool calls, a major source of inefficiency is that tool calls create pauses between LLM requests and trigger KV-Cache eviction, forcing recomputation. In addition, the long, unfiltered responses returned by external tools inflate the KV-Cache, so each decode step spends more time loading the growing cache and becomes steadily slower as context length increases. Existing efficiency metrics such as token counts and tool-call counts, however, fail to capture the real model inference latency. To address this, we introduce PTE (Prefill Token Equivalents), a hardware-aware TIR-efficiency metric that unifies internal reasoning and external tool-use costs while explicitly accounting for non-reusable KV-Cache and long tool responses. Validation in a high-concurrency industrial setting shows that PTE aligns significantly better with wall-clock latency than standard token counts, while maintaining consistent efficiency rankings across diverse hardware profiles. We conduct extensive experiments across five TIR benchmarks, quantify their PTE costs, and identify four inefficiency patterns that arise in TIR. We also find that trajectories with higher PTE costs tend to have lower reasoning correctness, indicating that simply using more tools does not improve answer quality.
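The two cost mechanisms described above (full-context re-prefill after KV-Cache eviction, and decode steps that slow down as the cache grows) can be illustrated with a minimal cost-model sketch. This is not the paper's PTE formula, which is not reproduced here; the function name, coefficients, and scaling assumptions are all hypothetical, chosen only to show how a prefill-token-equivalent accounting might unify the two effects.

```python
# Hypothetical sketch of a PTE-style cost model. The paper's actual PTE
# definition is not given here; `decode_weight` and the linear cache-scaling
# term are illustrative assumptions, not the published metric.

def pte_cost(segments, decode_weight=2.0, reuse_cache=False):
    """Estimate a trajectory's cost in prefill-token equivalents.

    segments: list of (prompt_tokens, decoded_tokens) pairs, one per LLM
              request in a tool-integrated reasoning trajectory (tool
              responses count toward the next request's prompt tokens).
    decode_weight: assumed cost of one decode step relative to prefilling
                   one token at empty context (illustrative value).
    reuse_cache: if False, each request re-prefills the entire accumulated
                 context, modeling KV-Cache eviction during tool-call pauses.
    """
    total = 0.0
    context = 0  # tokens currently held in context
    for prompt, decoded in segments:
        if reuse_cache:
            total += prompt            # prefill only the new tokens
        else:
            total += context + prompt  # eviction forces full recomputation
        context += prompt
        # Each decode step reads the whole growing cache, so charge a
        # per-step cost that increases with current context length.
        for _ in range(decoded):
            total += decode_weight * (1 + context / 4096)  # assumed scaling
            context += 1
    return total
```

Under this toy model, a trajectory whose tool responses are long and whose cache is evicted between calls accrues a much larger equivalent-prefill cost than raw token counts alone would suggest, which is the gap the abstract says PTE is designed to close.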