Even reward models that get the right answer can be dangerously wrong in their reasoning, leading to worse RLHF outcomes; R-Align fixes this by explicitly aligning their rationales with gold-standard judgments.
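A minimal sketch of what "aligning rationales with gold-standard judgments" could look like in practice, assuming a pairwise reward model that also emits a rationale-implied verdict; the loss combination, function names, and weighting are illustrative assumptions, not R-Align's exact recipe.

```python
# Sketch only: combines a standard Bradley-Terry preference loss with a
# hypothetical rationale-verdict cross-entropy term, so a reward model that
# scores the right answer for the wrong reason still pays a penalty.
import torch
import torch.nn.functional as F

def rationale_aligned_loss(score_chosen, score_rejected,
                           rationale_logits, gold_label, alpha=0.5):
    """score_*: reward-model scalar scores for the chosen/rejected response.
    rationale_logits: 2-way logits for the verdict the rationale implies
    (0 = prefers rejected, 1 = prefers chosen).
    gold_label: gold-standard judgment (1 = chosen response is better).
    """
    # Standard pairwise preference loss: chosen should outscore rejected.
    pref_loss = -F.logsigmoid(score_chosen - score_rejected).mean()
    # Rationale-alignment term: the rationale's verdict must match the gold
    # judgment, not just the final score ordering.
    rationale_loss = F.cross_entropy(rationale_logits, gold_label)
    return pref_loss + alpha * rationale_loss

# Toy usage with random tensors standing in for model outputs.
scores_c = torch.randn(4)
scores_r = torch.randn(4)
r_logits = torch.randn(4, 2)
gold = torch.ones(4, dtype=torch.long)
print(rationale_aligned_loss(scores_c, scores_r, r_logits, gold))
```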
Forget complex RLHF pipelines: models trained with simple PPO and rule-based rewards can outperform state-of-the-art reasoning models while slashing training costs by 90%.
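To make the "rule-based rewards" idea concrete, here is an illustrative sketch in which the reward is computed by simple programmatic checks on the model's output rather than by a learned reward model; the specific rules (a format check plus exact answer match) and their weights are assumptions for the example, not the paper's exact recipe.

```python
# Sketch of a rule-based reward: deterministic checks on the generated text.
import re

def rule_based_reward(response: str, gold_answer: str) -> float:
    reward = 0.0
    # Format rule: the response must wrap its final answer in \boxed{...}.
    match = re.search(r"\\boxed\{([^}]*)\}", response)
    if match:
        reward += 0.1  # small bonus for following the output format
        # Correctness rule: the boxed answer must match the reference exactly.
        if match.group(1).strip() == gold_answer.strip():
            reward += 1.0
    return reward

# These scalar rewards would then be fed to a standard PPO trainer in place of
# a learned reward-model score.
print(rule_based_reward(r"The sum is \boxed{42}", "42"))  # 1.1
print(rule_based_reward("The sum is 42", "42"))           # 0.0
```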