This paper introduces Constrained Group Relative Policy Optimization (C-GRPO), a Lagrangian-based extension of Group Relative Policy Optimization (GRPO) for constrained policy optimization with indicator cost functions. The authors identify and formally derive a pathology in naive multi-component advantage estimation: mismatched component-wise standard deviations corrupt the Lagrangian signal. They propose a scalarized advantage construction that addresses this issue, demonstrating improved constraint satisfaction and task success on both a toy gridworld and robotics tasks.
Mismatched standard deviations in multi-objective RL advantage estimation can completely break constrained learning, but a simple scalarization fixes it.
While Group Relative Policy Optimization (GRPO) has emerged as a scalable framework for critic-free policy learning, extending it to settings with explicit behavioral constraints remains underexplored. We introduce Constrained GRPO, a Lagrangian-based extension of GRPO for constrained policy optimization. Constraints are specified via indicator cost functions, enabling direct optimization of violation rates through a Lagrangian relaxation. We show that a naive multi-component treatment in advantage estimation can break constrained learning: mismatched component-wise standard deviations distort the relative importance of the different objective terms, which in turn corrupts the Lagrangian signal and prevents meaningful constraint enforcement. We formally derive this effect to motivate our scalarized advantage construction that preserves the intended trade-off between reward and constraint terms. Experiments in a toy gridworld confirm the predicted optimization pathology and demonstrate that scalarizing advantages restores stable constraint control. In addition, we evaluate Constrained GRPO on robotics tasks, where it improves constraint satisfaction while increasing task success, establishing a simple and effective recipe for constrained policy optimization in embodied AI domains that increasingly rely on large multimodal foundation models.
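The abstract's core claim can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; it assumes a group of G rollouts with scalar rewards, binary indicator costs, and a fixed Lagrange multiplier `lam`, and contrasts per-component normalization (which rescales reward and cost terms by their own group standard deviations, distorting the intended trade-off) with scalarizing before normalizing:

```python
import numpy as np

def naive_advantages(rewards, costs, lam, eps=1e-8):
    """Per-component treatment: each term is standardized by its own
    group std, so the effective weight on the cost term becomes
    lam * (std_r / std_c) rather than lam, corrupting the Lagrangian."""
    a_r = (rewards - rewards.mean()) / (rewards.std() + eps)
    a_c = (costs - costs.mean()) / (costs.std() + eps)
    return a_r - lam * a_c

def scalarized_advantages(rewards, costs, lam, eps=1e-8):
    """Scalarize first (r - lam * c), then normalize once, so the
    group-relative advantage preserves the reward/constraint trade-off."""
    s = rewards - lam * costs
    return (s - s.mean()) / (s.std() + eps)

# Hypothetical group of G = 4 rollouts (illustrative numbers only):
rewards = np.array([1.0, 0.2, 0.9, 0.1])
costs = np.array([1.0, 0.0, 1.0, 0.0])  # indicator: 1 iff constraint violated
lam = 2.0

print(naive_advantages(rewards, costs, lam))
print(scalarized_advantages(rewards, costs, lam))
```

Because the indicator costs have a different group standard deviation than the rewards, the two constructions produce different advantage orderings; only the scalarized version keeps the multiplier `lam` meaningful.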