School of Mathematics, Harbin Institute of Technology, Harbin, China
Correspondence: lijun.zhang@brgroup.com

Abstract

Activation steering provides parameter-efficient control over large language models (LLMs) at inference time, but many existing methods rely on off-distribution supervision and discrete masking, leading to brittle interventions. We propose ROAST (Rollout-based On-distribution Activation Steering Technique), which estimates steering directions via ROC from the model's own on-distribution rollouts and avoids hard sparsification through Continuous Soft Scaling (CSS) and Grouped Mean Normalization. Our empirical analysis shows that while activation magnitude correlates moderately with directional consistency, the variance in magnitude is large and often disproportionate to semantic quality, so high-magnitude activations risk dominating the global steering direction if left unnormalized. ROAST therefore applies grouped normalization to balance contributions across samples, yielding a more robust estimate of the consensus steering direction. Across models (0.
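The abstract's pipeline (per-sample steering deltas from rollouts, grouped mean normalization so high-magnitude samples cannot dominate, and soft scaling in place of a hard mask) can be sketched as follows. This is a minimal illustration under assumed forms of the paper's components: the function name `steering_direction`, the group size, and the sigmoid-based weighting are all hypothetical choices, not the authors' exact formulation.

```python
import numpy as np

def steering_direction(pos_acts, neg_acts, group_size=8, temperature=1.0):
    """Hypothetical sketch of ROAST-style direction estimation.

    pos_acts, neg_acts: (n_samples, d) hidden activations collected from the
    model's own rollouts, labelled for/against the target behaviour.
    """
    # Per-sample steering deltas, estimated on-distribution.
    deltas = pos_acts - neg_acts
    # Grouped Mean Normalization (assumed form): average within groups, then
    # rescale each group mean to unit norm so a few high-magnitude samples
    # cannot dominate the consensus direction.
    n = (len(deltas) // group_size) * group_size
    groups = deltas[:n].reshape(-1, group_size, deltas.shape[1]).mean(axis=1)
    groups /= np.linalg.norm(groups, axis=1, keepdims=True) + 1e-8
    direction = groups.mean(axis=0)
    # Continuous Soft Scaling (assumed form): smooth per-dimension sigmoid
    # weights instead of a hard top-k sparsification mask.
    weights = 1.0 / (1.0 + np.exp(-np.abs(direction) / temperature))
    return direction * weights
```

The soft weights keep every dimension's contribution differentiable and bounded, which is the stated motivation for avoiding discrete masking.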
Forget brittle, off-distribution steering: ROAST leverages on-distribution rollouts and normalization to achieve significant gains (+9.7% on GSM8K, +12.1% on TruthfulQA) by carefully balancing activation contributions.