The paper introduces Step-Saliency, a method for analyzing information flow in large reasoning models (LRMs) by pooling attention-gradient scores into step-to-step maps. This analysis reveals two key failure modes: Shallow Lock-in, where shallow layers over-focus on the current step, and Deep Decay, where deep layers lose saliency on the thinking segment. Based on these findings, the authors propose StepFlow, a test-time intervention that adjusts shallow saliency patterns and adds step-level residual connections in deep layers, improving accuracy on reasoning tasks without retraining.
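To make the pooling idea concrete, here is a minimal sketch of turning token-level attention-gradient saliency into a step-to-step map. The element-wise |A · dA| saliency, the mean pooling over heads and over step token spans, and the function name are illustrative assumptions; the paper's exact Step-Saliency computation may differ.

```python
# Sketch: pool token-level attention-gradient saliency into a step-to-step map.
import torch

def step_saliency_map(attn, attn_grad, step_spans):
    """attn, attn_grad: [num_heads, seq_len, seq_len] attention weights and their
    gradients w.r.t. the loss; step_spans: list of (start, end) token index ranges,
    one per reasoning step (end exclusive)."""
    # Token-level saliency: magnitude of attention times its gradient,
    # averaged over heads (a common attribution heuristic, assumed here).
    token_sal = (attn * attn_grad).abs().mean(dim=0)  # [seq_len, seq_len]

    num_steps = len(step_spans)
    step_map = torch.zeros(num_steps, num_steps)
    for i, (qs, qe) in enumerate(step_spans):      # query (current) step
        for j, (ks, ke) in enumerate(step_spans):  # key (context) step
            step_map[i, j] = token_sal[qs:qe, ks:ke].mean()
    return step_map  # step_map[i, j]: how much step i draws on step j

# Toy usage with random tensors standing in for a real forward/backward pass.
if __name__ == "__main__":
    heads, seq_len = 4, 12
    attn = torch.rand(heads, seq_len, seq_len)
    grad = torch.randn(heads, seq_len, seq_len)
    spans = [(0, 4), (4, 8), (8, 12)]  # e.g. question / thinking / summary segments
    print(step_saliency_map(attn, grad, spans))
```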
LRMs often fail at reasoning tasks not because they lack knowledge, but because of information flow bottlenecks between reasoning steps, which can be partially fixed post-hoc.
Large reasoning models (LRMs) that generate long chains of thought now perform well on multi-step math, science, and coding tasks. However, their behavior is still unstable and hard to interpret, and existing analysis tools struggle with such long, structured reasoning traces. We introduce Step-Saliency, which pools attention-gradient scores into step-to-step maps along the question-thinking-summary trajectory. Across several models, Step-Saliency reveals two recurring information-flow failures: Shallow Lock-in, where shallow layers over-focus on the current step and barely use earlier context, and Deep Decay, where deep layers gradually lose saliency on the thinking segment and the summary increasingly attends to itself and the last few steps. Motivated by these patterns, we propose StepFlow, a saliency-inspired test-time intervention that reshapes the shallow-layer saliency patterns identified by Step-Saliency via Odds-Equal Bridge and adds a small step-level residual in deep layers via Step Momentum Injection. StepFlow improves accuracy on math, science, and coding tasks across multiple LRMs without retraining, indicating that repairing information flow can recover part of their missing reasoning performance.
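For intuition about the deep-layer intervention, the sketch below mixes a pooled summary of the previous reasoning step back into the current step's hidden states at inference time, in the spirit of a step-level residual. The mixing coefficient alpha, the mean-pooled step summary, and the layer at which this is applied are assumptions for illustration, not the paper's exact Step Momentum Injection rule.

```python
# Sketch: a step-level residual applied to one deep layer's hidden states.
import torch

def inject_step_momentum(hidden, step_spans, alpha=0.1):
    """hidden: [seq_len, d_model] hidden states of a single deep layer;
    step_spans: list of (start, end) token ranges per reasoning step;
    alpha: assumed small mixing coefficient for the residual."""
    out = hidden.clone()
    prev_summary = None
    for start, end in step_spans:
        if prev_summary is not None:
            # Residual: nudge every token in the current step toward the
            # previous step's pooled representation.
            out[start:end] = out[start:end] + alpha * prev_summary
        prev_summary = hidden[start:end].mean(dim=0)  # pooled step summary
    return out

# Toy usage with random hidden states.
if __name__ == "__main__":
    hidden = torch.randn(12, 16)
    spans = [(0, 4), (4, 8), (8, 12)]
    print(inject_step_momentum(hidden, spans).shape)
```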