This paper introduces Token-level Adaptive Routing (TARo), a novel test-time alignment method for steering frozen LLMs towards structured reasoning. TARo trains reward models on step-wise mathematical traces to capture logical consistency and uses a learnable token-level router to guide the base model with the reward model's signal. Experiments demonstrate that TARo improves reasoning performance by up to 22.4% over the base model and 8.4% over existing token-level test-time alignment methods, while also generalizing across domains and model sizes.
Achieve significant reasoning gains in frozen LLMs (+22.4%) without retraining by adaptively routing reward model guidance at the token level during inference.
Large language models (LLMs) exhibit strong reasoning capabilities but typically require expensive post-training to reach high performance. Recent test-time alignment methods offer a lightweight alternative, but they have been explored mainly for preference alignment rather than reasoning. To bridge this gap, we propose Token-level Adaptive Routing (TARo), which steers frozen LLMs toward structured reasoning entirely at inference time. Specifically, we first train reward models on step-wise mathematical traces to capture fine-grained logical-consistency signals, then introduce a learnable token-level router that automatically controls how strongly the reward model guides the base model. Extensive experiments show that TARo significantly improves reasoning performance by up to +22.4% over the base model and +8.4% over existing token-level test-time alignment methods, while also boosting out-of-distribution clinical reasoning (MedXpertQA) and instruction following (AlpacaEval). Furthermore, TARo generalizes from small to large backbones without retraining, extending test-time alignment from preference optimization to robust, cross-domain reasoning.
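The token-level routing idea described above can be sketched as follows. This is a minimal, hypothetical illustration (not the paper's implementation): it assumes the reward model supplies a per-candidate-token score vector, the router is a simple sigmoid gate over the current hidden state, and all shapes and names (`route_step`, `router_w`, `router_b`) are invented for the example.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the vocabulary dimension.
    e = np.exp(x - x.max())
    return e / e.sum()

def route_step(base_logits, reward_scores, hidden, router_w, router_b):
    """One decoding step of token-level adaptive routing (sketch).

    base_logits  : (V,) logits from the frozen base LM
    reward_scores: (V,) per-candidate-token scores from a reward model
    hidden       : (H,) current hidden state fed to the router
    router_w/b   : learnable router parameters (hypothetical shapes)
    """
    # The router emits a scalar gate in (0, 1) deciding how strongly
    # the reward signal steers this particular token.
    gate = 1.0 / (1.0 + np.exp(-(hidden @ router_w + router_b)))
    # Guided distribution: base logits shifted by the gated reward signal;
    # the base model itself stays frozen.
    guided = base_logits + gate * reward_scores
    return softmax(guided), gate

# Toy usage with random tensors standing in for real model outputs.
rng = np.random.default_rng(0)
V, H = 8, 16
probs, gate = route_step(rng.normal(size=V), rng.normal(size=V),
                         rng.normal(size=H), rng.normal(size=H), 0.0)
print(probs.sum(), gate)
```

A fixed guidance weight would apply the reward signal uniformly; making the gate a learned function of the decoding state is what lets the router strengthen guidance on reasoning-critical tokens and back off elsewhere.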