This paper introduces LooComp, a margin-based framework for query-driven context pruning in retrieval-augmented generation (RAG). LooComp identifies critical sentences by measuring the change in clue richness when each is omitted, and trains an encoder-only Transformer with a composite ranking loss that enforces margins for critical sentences. Experiments demonstrate that LooComp achieves strong exact-match and F1 scores with high-throughput inference and lower memory requirements than major baselines, delivering effective compression ratios without degrading answering performance.
Achieve RAG efficiency without sacrificing accuracy: LooComp prunes context by identifying and retaining only the most critical sentences for answering a query.
Efficient context compression is crucial for improving the accuracy and scalability of question answering. For efficient retrieval-augmented generation (RAG), context should be delivered quickly, kept compact, and remain precise, ensuring clue sufficiency while keeping the LLM reader's cost within budget. We propose a margin-based framework for query-driven context pruning that identifies sentences critical for answering a query by measuring the change in clue richness when they are omitted. The model is trained with a composite ranking loss that enforces large margins for critical sentences while keeping non-critical ones near neutral. Built on a lightweight encoder-only Transformer, our approach generally achieves strong exact-match and F1 scores with higher-throughput inference and lower memory requirements than major baselines. Beyond efficiency, our method yields effective compression ratios without degrading answering performance, demonstrating its potential as a lightweight, practical alternative for retrieval-augmented tasks.
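The abstract describes a composite ranking loss with two roles: push critical sentences above a margin and hold non-critical ones near neutral. A minimal sketch of such a loss is shown below; the hinge/quadratic form, the margin value, the `neutral_weight` coefficient, and the function name are all illustrative assumptions, since the abstract does not specify the exact formulation.

```python
def composite_ranking_loss(scores, is_critical, margin=1.0, neutral_weight=0.5):
    """Hypothetical sketch of a composite ranking loss in the spirit of LooComp.

    scores      : model criticality score per sentence.
    is_critical : per-sentence labels, assumed to come from the leave-one-out
                  clue-richness drop described in the abstract.
    """
    loss = 0.0
    for score, critical in zip(scores, is_critical):
        if critical:
            # Hinge term: penalize critical sentences scoring below the margin.
            loss += max(0.0, margin - score)
        else:
            # Quadratic term: keep non-critical sentence scores near zero.
            loss += neutral_weight * score * score
    return loss / len(scores)
```

For example, a critical sentence already scoring above the margin contributes no loss, while a critical sentence at score 0 contributes the full margin, so training pressure concentrates on under-scored clue-bearing sentences.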