This paper identifies an implicit advantage symmetry in Group Relative Advantage Estimation (GRAE), the estimator used in GRPO, a standard Reinforcement Learning with Verifiable Rewards (RLVR) method for eliciting LLM reasoning, and argues that this symmetry hinders exploration and difficulty adaptation. The authors show that the symmetry leaves action logits unchanged for unsampled correct solutions and implicitly prioritizes medium-difficulty samples, both of which are suboptimal. They introduce Asymmetric GRAE (A-GRAE), which dynamically modulates exploration incentives and sample-difficulty focus, and demonstrate consistent gains over GRPO and its variants across seven benchmarks.
GRPO's struggle with exploration and difficulty adaptation in LLM reasoning stems from a previously unnoticed symmetry in its advantage estimation, which can be overcome by asymmetrically weighting correct vs. incorrect trajectories.
Reinforcement Learning with Verifiable Rewards (RLVR), particularly GRPO, has become the standard for eliciting LLM reasoning. However, its efficiency in exploration and difficulty adaptation remains an open challenge. In this work, we argue that these bottlenecks stem from an implicit advantage symmetry inherent in Group Relative Advantage Estimation (GRAE). This symmetry induces two critical limitations: (i) at the group level, strictly symmetric weights on correct and incorrect trajectories leave the logits of unsampled actions unchanged, thereby hindering exploration of novel correct solutions; (ii) at the sample level, the algorithm implicitly prioritizes medium-difficulty samples, remaining agnostic to the non-stationary demands of difficulty focus over training. Through controlled experiments, we show that this symmetric property is suboptimal, yielding two pivotal insights: (i) asymmetrically suppressing the advantages of correct trajectories encourages essential exploration; (ii) learning efficiency is maximized by a curriculum-like transition that prioritizes simpler samples initially before gradually shifting to complex ones. Motivated by these findings, we propose Asymmetric GRAE (A-GRAE), which dynamically modulates exploration incentives and sample-difficulty focus. Experiments across seven benchmarks demonstrate that A-GRAE consistently improves GRPO and its variants across both LLMs and MLLMs.
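To make the symmetry concrete, below is a minimal Python sketch of standard GRAE, which normalizes each trajectory's reward by the group mean and standard deviation, alongside a hypothetical asymmetric variant that down-weights positive advantages by an assumed factor `alpha`. The paper's actual A-GRAE modulates this weighting dynamically; the `asymmetric_advantages` function and its `alpha` parameter are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def grae_advantages(rewards):
    """Standard GRAE, as used in GRPO: each trajectory's advantage is its
    reward normalized by the group mean and standard deviation. Correct and
    incorrect trajectories get symmetric weight around the group mean."""
    rewards = np.asarray(rewards, dtype=float)
    std = rewards.std()
    if std == 0.0:  # all trajectories agree: no learning signal for this group
        return np.zeros_like(rewards)
    return (rewards - rewards.mean()) / std

def asymmetric_advantages(rewards, alpha=0.5):
    """Hypothetical asymmetric variant: suppress the positive advantages of
    correct trajectories by `alpha` (an assumed hyperparameter), reflecting
    the paper's first insight that asymmetric suppression leaves room to
    explore unsampled correct solutions. Illustration only."""
    adv = grae_advantages(rewards)
    adv[adv > 0] *= alpha  # asymmetrically down-weight correct trajectories
    return adv

# Example: a group of 4 sampled trajectories with binary verifiable rewards.
group_rewards = [1.0, 0.0, 1.0, 0.0]
print(grae_advantages(group_rewards))        # symmetric: [ 1. -1.  1. -1.]
print(asymmetric_advantages(group_rewards))  # positive side suppressed
```

With binary rewards, the symmetric estimator pushes probability mass toward sampled correct trajectories exactly as hard as it pushes away from incorrect ones, so mass removed from wrong answers flows only to already-sampled correct ones; suppressing the positive side is one way to leave headroom for unsampled solutions.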