This paper analyzes answer-to-reasoning attention patterns in LLMs performing quantitative reasoning, finding that correct answers exhibit a "benign self-reading pattern" characterized by forward-drifting attention and concentration on key semantic anchors. Incorrect solutions, conversely, show diffuse and irregular attention. Based on these observations, the authors propose a training-free steering method using Self-Reading Quality (SRQ) scores to guide inference towards more reliable reasoning integration, achieving consistent accuracy gains.
LLMs signal their internal certainty during answer decoding through predictable attention patterns on their own reasoning traces.
Thinking LLMs produce reasoning traces before answering. Prior activation-steering work has mainly targeted shaping these traces; how answer tokens actually read and integrate the reasoning to produce reliable outcomes remains less understood. Focusing on quantitative reasoning, we analyze answer-to-reasoning attention and observe a benign self-reading pattern aligned with correctness, characterized by a forward drift of the reading focus along the reasoning trace and persistent concentration on key semantic anchors, whereas incorrect solutions exhibit diffuse and irregular attention patterns. We interpret this as internal certainty during answer decoding: the model commits to a viable solution branch and integrates key evidence. Building on this, we propose a training-free steering method driven by Self-Reading Quality (SRQ) scores, which combine geometric metrics for process control with semantic metrics for content monitoring. SRQ selects data to build steering vectors that guide inference toward benign self-reading and away from uncertain, disorganized reading. Experiments show that our method yields consistent accuracy gains.
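To make the idea concrete, here is a minimal NumPy sketch of how metrics like these could be computed and combined. All function names, the specific metric definitions (centroid slope for forward drift, entropy for concentration), the weighting, and the difference-of-means steering vector are illustrative assumptions, not the paper's actual SRQ formulation.

```python
import numpy as np

def forward_drift(attn):
    """Toy geometric metric (assumed): slope of the attention centroid
    over reasoning positions as answer decoding proceeds.
    attn: (num_answer_tokens, num_reasoning_tokens), rows sum to 1.
    Positive slope = reading focus drifts forward along the trace."""
    positions = np.arange(attn.shape[1])
    centroids = attn @ positions            # centroid per answer token
    steps = np.arange(len(centroids))
    return np.polyfit(steps, centroids, 1)[0]

def concentration(attn):
    """Toy semantic proxy (assumed): 1 - normalized mean attention
    entropy; 1.0 means fully concentrated reading, 0.0 uniform."""
    ent = -(attn * np.log(attn + 1e-12)).sum(axis=1).mean()
    return 1.0 - ent / np.log(attn.shape[1])

def srq_score(attn, w=0.5):
    """Illustrative SRQ-style score: weighted combination of a
    geometric (process) and a semantic (content) metric."""
    return w * forward_drift(attn) + (1 - w) * concentration(attn)

def steering_vector(high_srq_acts, low_srq_acts):
    """Difference-of-means steering vector built from hidden states
    of high- vs. low-SRQ examples, each (n_examples, hidden_dim).
    Adding this direction at inference would nudge decoding toward
    benign self-reading under the sketch's assumptions."""
    v = high_srq_acts.mean(axis=0) - low_srq_acts.mean(axis=0)
    return v / np.linalg.norm(v)
```

Under this sketch, a solution whose answer tokens sweep forward and lock onto a few reasoning positions scores higher than one with uniform, stationary attention, which is the contrast the paper's benign self-reading pattern describes.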