KV Packet introduces a novel context-independent KV caching mechanism for LLMs that avoids recomputation by treating cached documents as immutable "packets" wrapped in trainable soft-token adapters. These adapters are trained via self-supervised distillation to handle context shifts, enabling seamless integration of cached content without modifying the underlying KV states. Experiments on Llama-3.1 and Qwen2.5 show that KV Packet achieves near-zero FLOPs and lower TTFT compared to recomputation-based methods, while maintaining comparable F1 scores.
Achieve near-zero FLOPs and faster time-to-first-token by treating cached documents as immutable packets, eliminating the need for KV recomputation in LLMs.
Large Language Models (LLMs) rely heavily on Key-Value (KV) caching to minimize inference latency. However, standard KV caches are context-dependent: reusing a cached document in a new context requires recomputing its KV states to account for shifts in the attention distribution. Existing solutions such as CacheBlend, EPIC, and SAM-KV mitigate this cost by selectively recomputing a subset of tokens, but they still incur non-negligible computational overhead (FLOPs) and increased Time-to-First-Token (TTFT) latency. In this paper, we propose KV Packet, a recomputation-free cache reuse framework that treats cached documents as immutable "packets" wrapped in lightweight trainable soft-token adapters, trained via self-supervised distillation to bridge context discontinuities. Experiments on Llama-3.1 and Qwen2.5 demonstrate that KV Packet achieves near-zero FLOPs and lower TTFT than recomputation-based baselines, while retaining F1 scores comparable to those of full recomputation.
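The abstract describes packets as frozen KV states flanked by adapter soft tokens, spliced into a new prompt without any recomputation. The minimal PyTorch sketch below illustrates what that assembly step could look like; the data layout and every name here (`KVPacket`, `assemble_cache`) are our assumptions for illustration, not the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple

import torch

# One attention layer's cache: (K, V), each [batch, heads, seq_len, head_dim].
LayerKV = Tuple[torch.Tensor, torch.Tensor]


@dataclass(frozen=True)
class KVPacket:
    """An immutable cached document: frozen KV states flanked by the KV of its
    trained soft-token adapters (hypothetical layout inferred from the abstract)."""
    prefix_kv: List[LayerKV]    # adapter soft tokens placed before the document
    document_kv: List[LayerKV]  # the document's frozen KV states (never recomputed)
    suffix_kv: List[LayerKV]    # adapter soft tokens placed after the document


def assemble_cache(packets: List[KVPacket], num_layers: int) -> List[LayerKV]:
    """Splice packets into a single past-KV cache by concatenation alone.

    No forward pass touches the cached tokens, so prefill cost for reused
    documents is pure memory traffic -- the "near-zero FLOPs" claim. Positional
    consistency across the splice points (e.g. RoPE offsets) is assumed to be
    absorbed by the adapters, which are distilled to bridge such context shifts.
    """
    cache: List[LayerKV] = []
    for layer in range(num_layers):
        ks, vs = [], []
        for p in packets:
            for k, v in (p.prefix_kv[layer], p.document_kv[layer], p.suffix_kv[layer]):
                ks.append(k)
                vs.append(v)
        cache.append((torch.cat(ks, dim=2), torch.cat(vs, dim=2)))
    return cache


# Toy check: two "documents" of 5 and 7 tokens, 4 adapter tokens on each side.
def rand_kv(seq: int) -> LayerKV:
    return (torch.randn(1, 8, seq, 64), torch.randn(1, 8, seq, 64))

layers = 2
def make_packet(n: int) -> KVPacket:
    return KVPacket([rand_kv(4)] * layers, [rand_kv(n)] * layers, [rand_kv(4)] * layers)

cache = assemble_cache([make_packet(5), make_packet(7)], num_layers=layers)
assert cache[0][0].shape[2] == (4 + 5 + 4) + (4 + 7 + 4)  # total cached length
```

The assembled list could then be handed to the model as its past key/value cache (for instance, converted via `DynamicCache.from_legacy_cache` in Hugging Face Transformers), so that only the user's query tokens are actually run through the network.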
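The adapters themselves are trained via self-supervised distillation. The paper's exact objective is not given here, but a standard formulation would distill a teacher that fully recomputes the document inside the new context into a student that sees only the frozen packet KV plus the trainable soft tokens. The sketch below assumes that setup; `teacher_forward` and `student_forward` are hypothetical helpers standing in for the model-specific forward passes.

```python
import torch
import torch.nn.functional as F


def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      tau: float = 1.0) -> torch.Tensor:
    """KL(teacher || student) over next-token distributions, an assumed but
    conventional distillation objective.

    Teacher: document fully recomputed in the new context (the expensive path).
    Student: immutable packet KV bridged by the trainable soft-token adapters.
    """
    s = F.log_softmax(student_logits / tau, dim=-1)
    t = F.log_softmax(teacher_logits / tau, dim=-1)
    return F.kl_div(s, t, log_target=True, reduction="batchmean") * tau ** 2


# Only the soft-token embeddings receive gradients; the base model weights and
# the packet's KV states stay frozen, keeping the adapters lightweight.
n_soft, d_model = 8, 4096  # illustrative sizes, not values from the paper
soft_prefix = torch.nn.Parameter(0.02 * torch.randn(1, n_soft, d_model))
soft_suffix = torch.nn.Parameter(0.02 * torch.randn(1, n_soft, d_model))
optimizer = torch.optim.AdamW([soft_prefix, soft_suffix], lr=1e-3)

# Training loop skeleton (self-supervised: contexts sampled around the document,
# no human labels; helper functions are hypothetical and model-specific):
#
# for batch in contexts_sampled_for_this_document:
#     teacher_logits = teacher_forward(model, batch)          # full recomputation
#     student_logits = student_forward(model, batch,          # frozen packet KV
#                                      soft_prefix, soft_suffix)
#     loss = distillation_loss(student_logits, teacher_logits)
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```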