LookaheadKV is introduced as a lightweight KV cache eviction framework that avoids explicit draft generation by augmenting transformer layers with parameter-efficient modules trained to predict importance scores. This yields accuracy comparable to methods that rely on computationally expensive draft generation. Experiments show LookaheadKV outperforms baselines on long-context tasks and reduces eviction cost by up to 14.5x, leading to faster time-to-first-token.
Imagine getting the accuracy boost of "glimpsing into the future" for KV cache eviction, but without the hefty cost of draft generation – LookaheadKV makes it real.
Transformer-based large language models (LLMs) rely on key-value (KV) caching to avoid redundant computation during autoregressive inference. While this mechanism greatly improves efficiency, the cache size grows linearly with the input sequence length, quickly becoming a bottleneck for long-context tasks. Existing solutions mitigate this problem by evicting prompt KV entries deemed unimportant, guided by estimated importance scores. Notably, a recent line of work proposes to improve eviction quality by "glimpsing into the future": a draft generator produces a surrogate future response approximating the target model's true response, and this surrogate is then used to estimate the importance of cached KV entries more accurately. However, these approaches rely on computationally expensive draft generation, which introduces substantial prefilling overhead and limits their practicality in real-world deployment. To address this challenge, we propose LookaheadKV, a lightweight eviction framework that retains the benefits of a surrogate future response without requiring explicit draft generation. LookaheadKV augments transformer layers with parameter-efficient modules trained to predict the true importance scores with high accuracy. Our design incurs negligible runtime overhead, comparable to existing inexpensive heuristics, while achieving accuracy superior to more costly approximation methods. Extensive experiments on long-context understanding benchmarks across a wide range of models demonstrate that our method not only outperforms recent competitive baselines on various long-context tasks, but also reduces the eviction cost by up to 14.5x, leading to significantly faster time-to-first-token. Our code is available at https://github.com/SamsungLabs/LookaheadKV.
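To make the mechanism concrete, below is a minimal sketch of how a parameter-efficient scoring module and score-guided eviction could fit together. The names (`ImportanceHead`, `evict_kv`), the low-rank design, and the top-k budget policy are illustrative assumptions, not the paper's actual implementation; the real code lives in the repository linked above.

```python
# Hypothetical sketch: score each cached prompt position with a small learned
# module, then keep only the highest-scoring positions within a fixed budget.
import torch
import torch.nn as nn


class ImportanceHead(nn.Module):
    """Parameter-efficient module that assigns an importance score to each cached position."""

    def __init__(self, hidden_dim: int, rank: int = 16):
        super().__init__()
        # Low-rank bottleneck keeps the number of added parameters negligible.
        self.down = nn.Linear(hidden_dim, rank, bias=False)
        self.up = nn.Linear(rank, 1, bias=False)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_dim) -> scores: (batch, seq_len)
        return self.up(torch.tanh(self.down(hidden_states))).squeeze(-1)


def evict_kv(keys: torch.Tensor, values: torch.Tensor,
             scores: torch.Tensor, budget: int):
    """Keep only the `budget` highest-scoring prompt positions in the KV cache."""
    # keys/values: (batch, heads, seq_len, head_dim); scores: (batch, seq_len)
    keep = scores.topk(budget, dim=-1).indices.sort(dim=-1).values  # preserve original order
    idx = keep[:, None, :, None].expand(-1, keys.size(1), -1, keys.size(-1))
    return keys.gather(2, idx), values.gather(2, idx)
```

In this sketch, the scoring head is trained to approximate the importance that a full future response would reveal, so at inference time eviction needs only one cheap forward pass through the head rather than generating a draft response.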