Search papers, labs, and topics across Lattice.
The paper introduces ReBalance, a training-free framework that dynamically adjusts the reasoning process of Large Reasoning Models (LRMs) to mitigate overthinking and underthinking. ReBalance uses confidence variance and consistent overconfidence as indicators of reasoning dynamics, steering the model's hidden states towards more efficient trajectories using a vector computed from reasoning mode prototypes. Experiments across various models and benchmarks demonstrate that ReBalance improves accuracy and reduces redundancy, offering a practical strategy for efficient LRM deployment.
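The summary's two indicators — high confidence variance for overthinking and consistent overconfidence for underthinking — can be sketched as a simple classifier over per-step confidence scores. This is an illustrative reconstruction, not the paper's implementation; the function name, window shape, and thresholds are assumptions.

```python
# Hypothetical sketch of ReBalance's confidence-based indicators.
# Thresholds are illustrative, not taken from the paper.
from statistics import pvariance

def classify_reasoning(confidences, var_thresh=0.02, over_thresh=0.9):
    """Label a window of per-step model confidences.

    High variance   -> 'overthinking'  (oscillating certainty, redundant steps)
    Uniformly high  -> 'underthinking' (overconfident, too little exploration)
    Otherwise       -> 'balanced'
    """
    if pvariance(confidences) > var_thresh:
        return "overthinking"
    if min(confidences) > over_thresh:
        return "underthinking"
    return "balanced"

print(classify_reasoning([0.2, 0.9, 0.3, 0.95]))    # oscillating confidence
print(classify_reasoning([0.95, 0.96, 0.97, 0.98])) # uniformly overconfident
print(classify_reasoning([0.6, 0.7, 0.65, 0.7]))    # stable, moderate
```

In this toy form the two failure modes are mutually exclusive per window: variance is checked first, so an oscillating-but-high trace counts as overthinking.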
LRMs can be made both more accurate AND more efficient, without training, by dynamically steering their reasoning trajectories based on real-time confidence metrics.
Large Reasoning Models (LRMs) have shown remarkable reasoning capabilities, yet they often suffer from overthinking, expending redundant computational steps on simple problems, or underthinking, failing to explore sufficient reasoning paths despite having the capability to do so. These issues lead to inefficiencies and potential inaccuracies, limiting practical deployment in resource-constrained settings. Existing methods to mitigate overthinking, such as suppressing reflective keywords or adjusting reasoning length, may inadvertently induce underthinking, compromising accuracy. Therefore, we propose ReBalance, a training-free framework that achieves efficient reasoning with balanced thinking. ReBalance leverages confidence as a continuous indicator of reasoning dynamics, identifying overthinking through high confidence variance and underthinking via consistent overconfidence. By aggregating hidden states from a small-scale dataset into reasoning mode prototypes, we compute a steering vector to guide LRMs' reasoning trajectories. A dynamic control function modulates this vector's strength and direction based on real-time confidence, pruning redundancy during overthinking and promoting exploration during underthinking. Extensive experiments conducted on four models ranging from 0.5B to 32B, and across nine benchmarks in math reasoning, general question answering, and coding tasks, demonstrate that ReBalance effectively reduces output redundancy while improving accuracy, offering a general, training-free, and plug-and-play strategy for efficient and robust LRM deployment. Project page and code are available at https://rebalance-ai.github.io.
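The abstract's mechanism — a steering vector computed from reasoning mode prototypes, scaled by a confidence-driven control function — could look roughly like the following. Everything here is an assumption: the prototype construction (a simple centroid difference), the control rule, and all thresholds and gains are illustrative stand-ins, not the paper's actual method.

```python
# Illustrative sketch of prototype-based steering with dynamic control.
# All names, thresholds, and the control rule are hypothetical.

def steering_vector(concise_protos, verbose_protos):
    """Difference of mean hidden-state prototypes (concise minus verbose)."""
    def centroid(vectors):
        return [sum(dim) / len(vectors) for dim in zip(*vectors)]
    c = centroid(concise_protos)
    v = centroid(verbose_protos)
    return [a - b for a, b in zip(c, v)]

def apply_steering(hidden, vec, confidence, conf_var,
                   var_thresh=0.02, over_thresh=0.9, gain=1.0):
    """Scale the steering vector by a confidence-based control signal.

    Overthinking (high confidence variance): steer toward the concise
    mode (+gain) to prune redundancy.  Underthinking (consistent
    overconfidence): reverse the direction (-gain) to promote
    exploration.  Otherwise leave the trajectory unchanged.
    """
    if conf_var > var_thresh:
        alpha = gain
    elif confidence > over_thresh:
        alpha = -gain
    else:
        alpha = 0.0
    return [h + alpha * s for h, s in zip(hidden, vec)]

# Toy 2-D hidden states: prototypes from a few example states.
vec = steering_vector([[1.0, 0.0], [1.0, 2.0]], [[0.0, 0.0], [0.0, 0.0]])
print(apply_steering([0.0, 0.0], vec, confidence=0.5, conf_var=0.1))  # prune
print(apply_steering([0.0, 0.0], vec, confidence=0.95, conf_var=0.0)) # explore
```

Because the control signal only flips the sign and scale of a single precomputed vector, the intervention stays training-free and cheap at inference time, which matches the abstract's plug-and-play framing.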