This paper identifies a gradient conflict between optimizing for accuracy and for calibration in Reinforcement Learning from Verifiable Rewards (RLVR) for LLMs. To address this, the authors propose Decoupled Policy Optimization (DCPO), a framework that separates the reasoning and calibration objectives during training. Experiments show that DCPO maintains accuracy comparable to GRPO while significantly improving calibration and reducing overconfidence.
LLMs trained with reinforcement learning from verifiable rewards (RLVR) become overconfident in incorrect answers, but a simple fix, decoupling the reasoning and calibration objectives, can restore proper calibration without sacrificing accuracy.
Reinforcement Learning from Verifiable Rewards (RLVR) significantly enhances large language model (LLM) reasoning but suffers severely from calibration degeneration, where models become excessively over-confident in incorrect answers. Previous studies have attempted to directly incorporate a calibration objective into the existing optimization target. However, our theoretical analysis demonstrates a fundamental gradient conflict between maximizing policy accuracy and minimizing calibration error. Building on this insight, we propose DCPO, a simple yet effective framework that systematically decouples the reasoning and calibration objectives. Extensive experiments demonstrate that DCPO not only preserves accuracy on par with GRPO but also achieves the best calibration performance and substantially mitigates the over-confidence issue. Our study provides valuable insights and a practical solution for more reliable LLM deployment.
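The abstract describes the decoupling only at a high level. As a minimal toy sketch (not the paper's actual method: the two-head architecture, the stop-gradient routing, and all names below are illustrative assumptions), one way to keep a calibration objective's gradient from conflicting with the reasoning objective is to route it through a detached copy of the shared representation, so it updates only a separate confidence head:

```python
import torch

torch.manual_seed(0)
backbone = torch.nn.Linear(8, 8)     # stand-in for the shared LLM representation
policy_head = torch.nn.Linear(8, 2)  # reasoning / answer head
conf_head = torch.nn.Linear(8, 1)    # calibration (confidence) head

x = torch.randn(4, 8)
labels = torch.tensor([0, 1, 0, 1])
correct = torch.tensor([1.0, 0.0, 1.0, 1.0])  # verifiable reward per sample

h = backbone(x)

# Reasoning objective: updates the backbone and the policy head.
reasoning_loss = torch.nn.functional.cross_entropy(policy_head(h), labels)

# Calibration objective (Brier score) on a *detached* representation:
# its gradient reaches only the confidence head, so it cannot conflict
# with the reasoning gradient inside the backbone.
conf = torch.sigmoid(conf_head(h.detach())).squeeze(-1)
calibration_loss = ((conf - correct) ** 2).mean()

(reasoning_loss + calibration_loss).backward()
```

After `backward()`, the backbone's gradient comes from the reasoning loss alone, while the confidence head still receives a calibration gradient; a coupled objective would instead push both gradients through the same shared parameters.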