This paper investigates the reasoning fidelity of Generative Reward Models (GenRMs) in Reinforcement Learning from Human Feedback (RLHF), revealing that high "Spurious Correctness" (S-Corr), where rationales are misaligned with golden judgments despite correct preference prediction, leads to policy degeneration during optimization. To address this, the authors introduce Rationale-Centric Alignment (R-Align), a training method that incorporates gold judgments to explicitly supervise rationale alignment. Experiments demonstrate that R-Align reduces S-Corr and improves actor performance across STEM, coding, instruction-following, and general tasks.
Even reward models that get the right answer can be dangerously wrong in their reasoning, leading to worse RLHF outcomes, but R-Align fixes this by explicitly aligning rationales with gold standard judgments.
Reinforcement Learning from Human Feedback (RLHF) remains indispensable for aligning large language models (LLMs) in subjective domains. To enhance robustness, recent work shifts toward Generative Reward Models (GenRMs) that generate rationales before predicting preferences. Yet in GenRM training and evaluation, practice remains outcome-label-only, leaving reasoning quality unchecked. We show that reasoning fidelity (the consistency between a GenRM's preference decision and reference decision rationales) is highly predictive of downstream RLHF outcomes, beyond standard label accuracy. Specifically, we repurpose existing reward-model benchmarks to compute Spurious Correctness (S-Corr), the fraction of label-correct decisions with rationales misaligned with golden judgments. Our empirical evaluation reveals substantial S-Corr even for competitive GenRMs, and higher S-Corr is associated with policy degeneration under optimization. To improve fidelity, we propose Rationale-Centric Alignment (R-Align), which augments training with gold judgments and explicitly supervises rationale alignment. R-Align reduces S-Corr on RM benchmarks and yields consistent gains in actor performance across STEM, coding, instruction following, and general tasks.
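The S-Corr definition above (the fraction of label-correct decisions whose rationales are misaligned with the golden judgments) can be sketched as a simple metric over judged examples. This is a minimal illustration, not the paper's implementation; the record fields and the boolean rationale-alignment flag are hypothetical stand-ins for however alignment to the gold judgment is actually scored:

```python
def spurious_correctness(records):
    """S-Corr sketch: among label-correct preference decisions,
    the fraction whose rationale is misaligned with the gold judgment.
    (Field names are illustrative, not from the paper.)"""
    label_correct = [r for r in records if r["pred_label"] == r["gold_label"]]
    if not label_correct:
        return 0.0
    spurious = sum(1 for r in label_correct if not r["rationale_aligned"])
    return spurious / len(label_correct)

# Toy data: three label-correct decisions, one with a misaligned rationale;
# the label-wrong record is excluded from the denominator by definition.
records = [
    {"pred_label": "A", "gold_label": "A", "rationale_aligned": True},
    {"pred_label": "A", "gold_label": "A", "rationale_aligned": False},
    {"pred_label": "B", "gold_label": "B", "rationale_aligned": True},
    {"pred_label": "A", "gold_label": "B", "rationale_aligned": False},
]
print(round(spurious_correctness(records), 3))  # → 0.333
```

Note that a lower S-Corr is better: it means fewer of the model's correct preference predictions rest on reasoning that disagrees with the gold judgment.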