The paper addresses the problem of unreliable steering vectors for controlling reasoning behaviors in LLMs, showing that boundaries identified by keyword matching in chain-of-thought traces are overwhelmingly unstable as behavioral signals in hidden states. The authors introduce a probabilistic model that formalizes intrinsic reasoning behaviors as stochastic events and propose stability filtering, which retains only boundaries where the model consistently reproduces the target behavior under re-generation. By combining stability filtering with content-subspace projection, they reach 0.784 accuracy on MATH-500 and demonstrate transferability of the resulting vectors across models.
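To make the stability criterion concrete, here is a minimal Python sketch, assuming access to a sampling-based decoder `generate(prefix)` and a behavior check `behavior_detected(text)` (for instance the same keyword matcher used for detection); the function names, sample count, and threshold are illustrative placeholders, not values from the paper.

```python
def is_stable_boundary(generate, prefix, behavior_detected,
                       n_samples=8, threshold=0.75):
    """Keep a keyword-detected boundary only if re-generating from the
    same prefix consistently reproduces the target behavior.

    generate(prefix)        -> str: one sampled continuation (placeholder)
    behavior_detected(text) -> bool: does the behavior occur? (placeholder)
    """
    # Estimate the context-dependent trigger probability at this boundary
    # by re-sampling continuations from the identical prefix.
    hits = sum(behavior_detected(generate(prefix)) for _ in range(n_samples))
    return hits / n_samples >= threshold
```

Under the paper's probabilistic framing, this amounts to estimating the trigger probability of the behavior at each boundary and discarding boundaries where it is low, so that unstable boundaries no longer dilute the steering signal.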
93% of "reasoning steps" identified by keyword matching are actually noise, but a simple stability filter and content-subspace projection can boost steering vector performance by 5-6% and enable cross-model transfer.
Steering vectors offer a training-free mechanism for controlling reasoning behaviors in large language models, but constructing effective vectors requires identifying genuine behavioral signals in the model's hidden states. For behaviors that can be toggled via prompts, this is straightforward. However, many reasoning behaviors -- such as self-reflection -- emerge spontaneously and resist prompt-level control. Current methods detect these behaviors through keyword matching in chain-of-thought traces, implicitly assuming that every detected boundary encodes a genuine behavioral signal. We show that this assumption is overwhelmingly wrong: across 541 keyword-detected boundaries, 93.3% are behaviorally unstable, failing to reproduce the detected behavior under re-generation from the same prefix. We develop a probabilistic model that formalizes intrinsic reasoning behaviors as stochastic events with context-dependent trigger probabilities, and show that unstable boundaries dilute the steering signal. Guided by this analysis, we propose stability filtering, which retains only boundaries where the model consistently reproduces the target behavior. Combined with a content-subspace projection that removes residual question-specific noise, our method achieves 0.784 accuracy on MATH-500 (+5.0 over the strongest baseline). The resulting steering vectors transfer across models in the same architecture family without re-extraction, improving Nemotron-Research-Reasoning-1.5B (+5.0) and DeepScaleR-1.5B-Preview (+6.0). Code is available at https://github.com/zhmzm/stability-steering.
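The content-subspace projection can be sketched in the same spirit. The following is a minimal illustration, assuming the question-specific "content" subspace is approximated by the top principal directions of the pooled boundary activations; the paper's actual subspace construction may differ, and `k` is a hypothetical choice rather than a reported hyperparameter.

```python
import numpy as np

def steering_vector(pos_acts, neg_acts, k=10):
    """Mean-difference steering vector with a content-subspace projection.

    pos_acts, neg_acts: (n, d) hidden states at stable boundaries with
    and without the target behavior. The top-k principal directions of
    the pooled activations are treated as question-specific content and
    projected out of the raw direction (an illustrative construction).
    """
    v = pos_acts.mean(axis=0) - neg_acts.mean(axis=0)      # raw mean difference
    pooled = np.vstack([pos_acts, neg_acts])
    pooled = pooled - pooled.mean(axis=0)                   # center activations
    _, _, vt = np.linalg.svd(pooled, full_matrices=False)   # principal directions
    content = vt[:k]                                        # (k, d) orthonormal basis
    v = v - content.T @ (content @ v)                       # remove content component
    return v / np.linalg.norm(v)                            # unit-norm steering vector
```

Because the projection only depends on activation geometry, a vector built this way can in principle be applied to another model with the same hidden dimensionality, consistent with the cross-model transfer the abstract reports within an architecture family.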