This paper introduces Uncertainty-Triggered Adaptive Context Allocation (UT-ACA), a framework that dynamically adjusts the context window size during LLM inference based on token-level uncertainty. UT-ACA trains an uncertainty detector that combines semantic embeddings with logit confidence; when uncertainty exceeds a threshold, it rolls back, expands the context window, and regenerates the token. Experiments demonstrate that UT-ACA reduces average context usage while maintaining generation quality in long-context scenarios.
By adaptively allocating context based on token-level uncertainty, LLMs can maintain generation quality in long-context scenarios while using significantly less context.
Long-context inference remains challenging for large language models due to attention dilution and out-of-distribution degradation. Context selection mitigates this limitation by attending to a subset of key-value cache entries, yet most methods allocate a fixed context budget throughout decoding despite highly non-uniform token-level contextual demands. To address this issue, we propose Uncertainty-Triggered Adaptive Context Allocation (UT-ACA), an inference-time framework that dynamically adjusts the context window based on token-wise uncertainty. UT-ACA learns an uncertainty detector that combines semantic embeddings with logit-based confidence while accounting for uncertainty accumulation across decoding steps. When insufficient evidence is indicated, UT-ACA selectively rolls back, expands the context window, and regenerates the token with additional support. Experiments show that UT-ACA substantially reduces average context usage while preserving generation quality in long-context settings.
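The rollback-and-expand loop described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the `mock_model` stand-in, the entropy-based uncertainty proxy, the doubling expansion schedule, and the threshold value are all assumptions; the paper's detector additionally uses semantic embeddings and tracks uncertainty accumulation across decoding steps, which are omitted here for brevity.

```python
import math

def token_uncertainty(logits):
    # Softmax entropy as a simple logit-based confidence proxy
    # (a stand-in for the paper's learned uncertainty detector).
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    return -sum(p * math.log(p) for p in probs if p > 0)

def decode_adaptive(model, max_new, base_ctx, max_ctx, threshold):
    """Decode max_new tokens, expanding the attended context on high uncertainty.

    model(generated, ctx) -> (token, logits) is a hypothetical interface:
    it decodes one token while attending to ctx KV-cache entries.
    """
    ctx = base_ctx
    out = []
    while len(out) < max_new:
        token, logits = model(out, ctx)
        if token_uncertainty(logits) > threshold and ctx < max_ctx:
            # Insufficient evidence: roll back this token, expand the
            # context window, and regenerate with additional support.
            ctx = min(max_ctx, ctx * 2)
            continue
        out.append(token)
    return out, ctx

def mock_model(generated, ctx):
    # Hypothetical model: flat (uncertain) logits until enough
    # context is attended, then a confidently peaked distribution.
    if ctx >= 8:
        return len(generated), [5.0, 0.0, 0.0]  # low entropy
    return 0, [1.0, 1.0, 1.0]  # maximal entropy

tokens, final_ctx = decode_adaptive(mock_model, max_new=3,
                                    base_ctx=2, max_ctx=16, threshold=0.5)
print(tokens, final_ctx)
```

On this toy model the loop expands the window from 2 to 8 entries before any token is accepted, mirroring how UT-ACA spends extra context only where token-level demand is high.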