This paper introduces decoding-aligned KV cache compression via position-aware pseudo queries (DapQ), a method for selectively evicting less important tokens from the KV cache during LLM inference. DapQ constructs pseudo-queries based on positional information to simulate decoding-stage queries, creating a more accurate observation window for assessing token importance compared to input-side attention. Experiments across various benchmarks and LLMs demonstrate that DapQ achieves superior performance, especially under tight memory constraints, exhibiting near-lossless performance at a 3% KV cache budget on NIAH.
Forget content, remember position: crafting pseudo-queries based on token position alone yields surprisingly effective KV cache compression for LLMs, rivaling methods that analyze input semantics.
The Key-Value (KV) cache is crucial for efficient Large Language Model (LLM) inference, but excessively long contexts drastically increase its memory footprint. Existing KV cache compression methods typically rely on input-side attention patterns within a prompt observation window to estimate token importance during the prefill stage. As a result, they can fail to preserve tokens critical to future generation, since the importance assessment is not derived from the decoding process. Intuitively, an effective observation window should mirror the decoding-stage queries to accurately reflect which tokens the generation process will attend to. However, ground-truth decoding queries are inherently unavailable during inference. When constructing pseudo queries to approximate them, we find that positional information plays a more critical role than semantic content. Motivated by this insight, we propose decoding-aligned KV cache compression via position-aware pseudo queries (DapQ), a novel and lightweight eviction framework that leverages position-aware pseudo queries to simulate the output tokens, thereby establishing an effective observation window for importance assessment. It aligns closely with the actual generation context and enables precise token eviction. Extensive evaluations across multiple benchmarks and LLMs demonstrate that DapQ achieves superior performance, particularly under strict memory constraints (e.g., near-lossless performance of 99.5% on NIAH with a 3% KV cache budget).
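To make the core idea concrete, here is a minimal sketch of position-aware pseudo-query scoring in NumPy. It is an illustration under stated assumptions, not the paper's actual construction: the pseudo query's content vector (here, the mean of the cached keys), the RoPE variant, and the number of simulated decoding positions are all placeholder choices. The scheme it instantiates is the one the abstract describes: build queries at future positions, score cached tokens by the attention they would receive, and evict the lowest-scoring ones.

```python
import numpy as np

def rope(vec, pos, base=10000.0):
    """Apply rotary position embedding (RoPE, NeoX-style half split) at position pos."""
    half = vec.shape[-1] // 2
    freqs = base ** (-np.arange(half) / half)
    angles = pos * freqs
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = vec[:half], vec[half:]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos])

def evict_with_pseudo_queries(keys, budget, content=None, future_positions=None):
    """
    Score cached tokens with position-aware pseudo queries; keep the top `budget`.
    keys: (n_tokens, d) cached key vectors for one attention head.
    content: base content vector for the pseudo query. Using the mean key is a
             hypothetical choice -- DapQ's actual construction may differ.
    """
    n, d = keys.shape
    if content is None:
        content = keys.mean(axis=0)              # placeholder content vector
    if future_positions is None:
        future_positions = range(n, n + 4)       # simulate a few decoding steps
    scores = np.zeros(n)
    for pos in future_positions:
        q = rope(content, pos)                   # position-aware pseudo query
        att = keys @ q / np.sqrt(d)              # scaled dot-product logits
        att = np.exp(att - att.max())
        scores += att / att.sum()                # accumulate softmax attention
    # Retain the `budget` highest-scoring tokens, preserving cache order.
    return np.sort(np.argsort(scores)[-budget:])

# Example: 16 cached tokens, keep a budget of 4 (a 25% cache budget)
rng = np.random.default_rng(0)
keys = rng.standard_normal((16, 64))
kept = evict_with_pseudo_queries(keys, budget=4)
```

In a real deployment this scoring would run per head at the end of prefill, and the evicted keys and values would simply be dropped from the cache before decoding begins.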