This paper introduces HiLight, an Evidence Emphasis framework that enhances the performance of frozen Large Language Models (LLMs) by decoupling evidence selection from reasoning. A lightweight Emphasis Actor is trained to insert highlight tags around crucial spans while leaving the original context otherwise unchanged, improving the LLM's ability to identify and use decisive evidence. The approach yields significant gains on sequential recommendation and long-context question answering tasks, and works across various Solver families without requiring evidence labels or modifications to the Solver itself.
Highlighting pivotal evidence can boost LLM performance without altering the original context, leading to substantial improvements in reasoning tasks.
Large Language Models (LLMs) can reason well, yet often miss decisive evidence when it is buried in long, noisy contexts. We introduce HiLight, an Evidence Emphasis framework that decouples evidence selection from reasoning for frozen LLM solvers. HiLight avoids compressing or rewriting the input, which can discard or distort evidence, by training a lightweight Emphasis Actor to insert minimal highlight tags around pivotal spans in the unaltered context. A frozen Solver then performs downstream reasoning on the emphasized input. We cast highlighting as a weakly supervised decision-making problem and optimize the Actor with reinforcement learning using only the Solver's task reward, requiring no evidence labels and no access to or modification of the Solver. Across sequential recommendation and long-context question answering, HiLight consistently improves performance over strong prompt-based and automated prompt-optimization baselines. The learned emphasis policy transfers zero-shot to both smaller and larger unseen Solver families, including an API-based Solver, suggesting that the Actor captures genuine, reusable evidence structure rather than overfitting to a single backbone.
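The two mechanics described above, inserting minimal highlight tags around pivotal spans and optimizing the Actor from only the Solver's scalar task reward, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `<hl>` tag strings, the `Span` representation, and the REINFORCE-style loss form are all assumptions for exposition.

```python
# Hedged sketch of HiLight's emphasis step: insert highlight tags around
# chosen spans without altering the underlying text, and score the Actor's
# choices with a REINFORCE-style objective driven only by a scalar task
# reward from the frozen Solver. Names and tag strings are illustrative.
from dataclasses import dataclass
from typing import List


@dataclass
class Span:
    start: int  # character offset, inclusive
    end: int    # character offset, exclusive


def emphasize(context: str, spans: List[Span],
              open_tag: str = "<hl>", close_tag: str = "</hl>") -> str:
    """Wrap each selected span in highlight tags; the original characters
    of the context are preserved verbatim between and inside the tags."""
    out, pos = [], 0
    for s in sorted(spans, key=lambda sp: sp.start):
        out.append(context[pos:s.start])
        out.append(open_tag + context[s.start:s.end] + close_tag)
        pos = s.end
    out.append(context[pos:])
    return "".join(out)


def reinforce_loss(log_probs: List[float], reward: float,
                   baseline: float = 0.0) -> float:
    """Policy-gradient surrogate: scale the Actor's log-probabilities of its
    tagging decisions by the (baselined) Solver reward. No evidence labels
    and no Solver gradients are needed, only the scalar task reward."""
    return -(reward - baseline) * sum(log_probs)


# Example: emphasize one pivotal span, then score the decision.
tagged = emphasize("the cat sat", [Span(4, 7)])
# tagged == "the <hl>cat</hl> sat"
loss = reinforce_loss([-0.1, -0.2], reward=1.0)
```

In the full method the Actor would be a trained model proposing spans and the reward would come from the Solver's downstream task metric; the sketch only shows that the context itself is never rewritten, which is the property that distinguishes emphasis from compression or rewriting.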