Pichay, a demand paging system for LLM context windows, was developed to address the problem of inefficient context utilization in LLMs, where all information occupies context for the entire session. By acting as a transparent proxy, Pichay evicts stale content, detects page faults, and pins working-set pages based on fault history. Results from offline replay and live production deployment demonstrate significant context consumption reduction (up to 93%) with low fault rates, highlighting the potential of virtual memory techniques for optimizing LLM performance.
LLMs waste 21.8% of their effective input tokens on structural inefficiencies, but a demand paging system can cut context consumption by up to 93% without sacrificing performance.
The context window of a large language model is not memory. It is L1 cache: a small, fast, expensive resource that the field treats as the entire memory system. There is no L2, no virtual memory, no paging. Every tool definition, every system prompt, and every stale tool result occupies context for the lifetime of the session. The result is measurable: across 857 production sessions and 4.45 million effective input tokens, 21.8% is structural waste. We present Pichay, a demand paging system for LLM context windows. Implemented as a transparent proxy between client and inference API, Pichay interposes on the message stream to evict stale content, detect page faults when the model re-requests evicted material, and pin working-set pages identified by fault history. In offline replay across 1.4 million simulated evictions, the fault rate is 0.0254%. In live production deployment over 681 turns, the system reduces context consumption by up to 93% (5,038 KB to 339 KB); under extreme sustained pressure, the system remains operational but exhibits the expected thrashing pathology, with repeated fault-in of evicted content. The key observation is that the problems the field faces (context limits, attention degradation, cost scaling, lost state across sessions) are virtual memory problems wearing different clothes. The solutions exist: working set theory (Denning, 1968), demand paging, fault-driven replacement policies, and memory hierarchies with multiple eviction-managed levels. We describe the architecture of a full memory hierarchy for LLM systems (L1 through persistent storage), report on the first three levels deployed in production use (L1 eviction, L2 fault-driven pinning, L3 model-initiated conversation compaction), and identify cross-session memory as the remaining frontier.
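The eviction, fault-detection, and pinning loop the abstract describes can be sketched as follows. This is a minimal illustration, not Pichay's actual implementation; the class name, thresholds, and stub format are all hypothetical.

```python
class PagingProxy:
    """Hypothetical sketch of demand paging over an LLM message stream:
    evict stale pages (L1), fault them back in on re-request, and pin
    pages whose fault history shows they belong to the working set (L2)."""

    def __init__(self, stale_after=8, pin_threshold=2):
        self.stale_after = stale_after      # turns of inactivity before a page is evictable
        self.pin_threshold = pin_threshold  # faults before a page is pinned
        self.fault_counts = {}              # page_id -> observed fault count
        self.pinned = set()                 # pages kept resident regardless of age
        self.evicted = {}                   # page_id -> full content (backing store)

    def evict_stale(self, pages, turn):
        """Replace stale, unpinned pages with a short stub so they stop
        occupying context; the full content moves to the backing store."""
        kept = []
        for page in pages:
            age = turn - page["turn"]
            if age > self.stale_after and page["id"] not in self.pinned:
                self.evicted[page["id"]] = page["content"]
                kept.append({**page, "content": f"[evicted: {page['id']}]"})
            else:
                kept.append(page)
        return kept

    def on_model_request(self, page_id):
        """The model re-requested evicted material: a page fault.
        Fault the content back in, and pin pages the model keeps needing."""
        self.fault_counts[page_id] = self.fault_counts.get(page_id, 0) + 1
        if self.fault_counts[page_id] >= self.pin_threshold:
            self.pinned.add(page_id)
        return self.evicted.get(page_id)
```

In this sketch, repeated faults on the same page are the signal that it is part of the working set, echoing fault-driven replacement policies from classical virtual memory; a real proxy would also have to rewrite the request/response stream in place, which is elided here.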