IceCache addresses the memory bottleneck of KV caches in long-sequence LLMs by integrating semantic token clustering with PagedAttention for efficient CPU-GPU offloading. It organizes semantically related tokens into contiguous memory regions, enabling more precise token selection and better memory-bandwidth utilization during transfers. On LongBench, IceCache retains 99% of full-KV-cache accuracy with only a 256-token budget, and it matches or exceeds existing offloading methods in both latency and accuracy while using only 25% of their KV cache token budget.
LLMs can retain near-full accuracy on long sequences using only 25% of the KV cache budget of comparable offloading methods, thanks to a novel semantic clustering approach that dramatically improves CPU-GPU offloading.
The Key-Value (KV) cache plays a crucial role in accelerating inference in large language models (LLMs) by storing intermediate attention states and avoiding redundant computation during autoregressive generation. However, its memory footprint scales linearly with sequence length, often leading to severe memory bottlenecks on resource-constrained hardware. Prior work has explored offloading the KV cache to the CPU while retaining only a subset on the GPU, but these approaches often rely on imprecise token selection and suffer performance degradation in long-generation tasks such as chain-of-thought reasoning. In this paper, we propose a novel KV cache management strategy, IceCache, which integrates semantic token clustering with PagedAttention. By organizing semantically related tokens into contiguous memory regions managed by a hierarchical, dynamically updatable data structure, our method enables more efficient token selection and better utilization of memory bandwidth during CPU-GPU transfers. Experimental results on LongBench show that, with a 256-token budget, IceCache maintains 99% of the original accuracy achieved by the full KV cache model. Moreover, compared to other offloading-based methods, IceCache attains competitive or even superior latency and accuracy while using only 25% of the KV cache token budget, demonstrating its effectiveness in long-sequence scenarios. The code is available on our project website at https://yuzhenmao.github.io/IceCache/.
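To make the cluster-then-page idea concrete, here is a minimal PyTorch sketch of the pattern the abstract describes: cluster cached tokens by key similarity, lay each cluster out as a contiguous page, and load only the most query-relevant pages onto the GPU under a fixed token budget. This is a hypothetical illustration under our own assumptions, not IceCache's actual implementation; the function names (`cluster_kv_cache`, `select_pages`), the k-means clustering, and the centroid-dot-query scoring are all stand-ins for whatever the paper's hierarchical structure actually does.

```python
import torch

def cluster_kv_cache(keys, values, n_clusters=8, iters=10):
    """Group cached tokens into clusters of semantically similar keys via a
    few rounds of k-means, then lay each cluster out as a contiguous page."""
    n, d = keys.shape
    # Initialize centroids from randomly chosen cached keys.
    centroids = keys[torch.randperm(n)[:n_clusters]].clone()
    for _ in range(iters):
        # Assign every token to its nearest centroid.
        assign = torch.cdist(keys, centroids).argmin(dim=1)
        for c in range(n_clusters):
            mask = assign == c
            if mask.any():
                centroids[c] = keys[mask].mean(dim=0)
    # Materialize each cluster as one contiguous (centroid, keys, values) page,
    # so a later transfer is a single contiguous copy rather than a gather.
    pages = []
    for c in range(n_clusters):
        idx = (assign == c).nonzero(as_tuple=True)[0]
        pages.append((centroids[c], keys[idx].contiguous(), values[idx].contiguous()))
    return pages

def select_pages(query, pages, token_budget=256):
    """Rank pages by centroid-query similarity and keep whole pages until
    the token budget is exhausted; everything else stays offloaded on CPU."""
    scores = torch.stack([centroid @ query for centroid, _, _ in pages])
    selected, used = [], 0
    for i in scores.argsort(descending=True).tolist():
        _, k, v = pages[i]
        if used + k.shape[0] > token_budget:
            continue  # page doesn't fit in the remaining budget
        selected.append((k, v))  # in practice: one contiguous H2D copy per page
        used += k.shape[0]
    return selected

# Toy usage: 4096 cached tokens with head dimension 64, 256-token GPU budget.
keys, values = torch.randn(4096, 64), torch.randn(4096, 64)
pages = cluster_kv_cache(keys, values)
onboard = select_pages(torch.randn(64), pages, token_budget=256)
print(sum(k.shape[0] for k, _ in onboard), "tokens kept on GPU")
```

The design point this sketch tries to capture is that transfer granularity and selection granularity coincide: because a page holds semantically related tokens in contiguous memory, choosing relevant tokens and moving them efficiently across the CPU-GPU boundary become the same operation.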