The paper investigates spurious signal amplification in test-time reinforcement learning (TTRL) for math reasoning, identifying medium-consistency responses as a key source of reward noise that is exacerbated by group-relative advantage estimation. To address this, the authors propose Debiased and Denoised test-time Reinforcement Learning (DDRL), which combines frequency-based sampling to exclude ambiguous samples, debiased advantage estimation with fixed advantages, and consensus-based off-policy refinement. Experiments show that DDRL outperforms existing TTRL baselines on mathematical reasoning benchmarks.
Test-time RL's vulnerability to noisy pseudo-labels is amplified by group-relative advantage estimation, but can be mitigated with a surprisingly simple debiasing and denoising approach.
Test-time reinforcement learning (TTRL) adapts models at inference time via pseudo-labeling, leaving it vulnerable to spurious optimization signals from label noise. Through an empirical study, we observe that responses with medium consistency form an ambiguity region and constitute the primary source of reward noise. Crucially, we find that such spurious signals can even be amplified through group-relative advantage estimation. Motivated by these findings, we propose a unified framework, Debiased and Denoised test-time Reinforcement Learning (DDRL), to mitigate spurious signals. Concretely, DDRL first applies a frequency-based sampling strategy to exclude ambiguous samples while maintaining a balanced set of positive and negative examples. It then adopts debiased advantage estimation with fixed advantages, removing the bias introduced by group-relative policy optimization. Finally, DDRL incorporates a consensus-based off-policy refinement stage, which leverages the rejection-sampled dataset to enable efficient and stable model updates. Experiments on three large language models across multiple mathematical reasoning benchmarks demonstrate that DDRL consistently outperforms existing TTRL baselines. The code will soon be released at https://github.com/yuyongcan/DDRL.
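The abstract's first two components can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, the consistency thresholds `low`/`high`, and the use of exact-match answer frequencies are all illustrative assumptions; the paper's actual sampling strategy and advantage values may differ.

```python
from collections import Counter

def ddrl_filter_and_advantage(answers, low=0.3, high=0.7):
    """Hypothetical sketch of DDRL's first two steps for one prompt.

    answers: final answers extracted from G sampled responses.
    Returns None if the prompt falls in the ambiguity region,
    else (pseudo_label, per-response fixed advantages).
    """
    counts = Counter(answers)
    pseudo_label, freq = counts.most_common(1)[0]  # majority vote as pseudo-label
    consistency = freq / len(answers)
    # Medium-consistency prompts form the ambiguity region and are the
    # primary source of reward noise: exclude them from the update.
    if low < consistency < high:
        return None
    # Debiased advantage estimation: fixed +1/-1 advantages by agreement
    # with the pseudo-label, instead of group-relative normalization.
    advantages = [1.0 if a == pseudo_label else -1.0 for a in answers]
    return pseudo_label, advantages
```

With a high-consistency group such as `["4", "4", "4", "4", "7"]` the prompt is kept and responses receive fixed advantages `[1, 1, 1, 1, -1]`; a medium-consistency group such as `["4", "4", "7", "7", "9"]` is dropped entirely.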