This paper identifies a key flaw in current Reinforcement Learning from Verifiable Rewards (RLVR) frameworks for Large Vision-Language Models (LVLMs): distributing the same advantage across all tokens dilutes the learning signals for visually grounded reasoning. To address this, the authors introduce Perception-Grounded Policy Optimization (PGPO), a fine-grained credit assignment framework that dynamically reshapes advantages at the token level based on "Token Visual Dependency," quantified via KL divergence. Experiments across seven benchmarks show that PGPO improves model performance by 18.7% on average while reducing gradient variance and improving robustness.
LVLMs can be boosted by 18.7% simply by focusing RLVR training on the few tokens that actually depend on visual input.
While Reinforcement Learning from Verifiable Rewards (RLVR) has advanced reasoning in Large Vision-Language Models (LVLMs), prevailing frameworks suffer from a foundational methodological flaw: by distributing identical advantages across all generated tokens, these methods dilute the learning signals essential for optimizing the critical, visually grounded steps of multimodal reasoning. To bridge this gap, we formulate Token Visual Dependency, quantifying the causal information gain of visual inputs via the Kullback-Leibler (KL) divergence between visual-conditioned and text-only predictive distributions. We find that this dependency is highly sparse and semantically pivotal, and accordingly introduce Perception-Grounded Policy Optimization (PGPO), a novel fine-grained credit assignment framework that dynamically reshapes advantages at the token level. Through a threshold-gated, mass-conserving mechanism, PGPO amplifies learning signals for visually dependent tokens while suppressing gradient noise from linguistic priors. Extensive experiments with the Qwen2.5-VL series across seven challenging multimodal reasoning benchmarks demonstrate that PGPO improves performance by 18.7% on average. Both theoretical and empirical analyses confirm that PGPO effectively reduces gradient variance, prevents training collapse, and acts as a potent regularizer for robust, perception-grounded multimodal reasoning. Code will be released at https://github.com/Yzk1114/PGPO.
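The abstract names two ingredients, a KL-based Token Visual Dependency score and a threshold-gated, mass-conserving advantage reshaping, but does not give exact formulas. Below is a minimal PyTorch sketch of one plausible reading: per-token KL divergence between next-token distributions decoded with and without the image, followed by a gated reweighting that rescales so the sequence's total advantage mass is preserved. The function names and the hyperparameters tau, alpha, and beta are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def token_visual_dependency(logits_with_image: torch.Tensor,
                            logits_text_only: torch.Tensor) -> torch.Tensor:
    """Per-token visual dependency as KL(p_visual || p_text_only).

    Both inputs are [seq_len, vocab_size] next-token logits for the same
    generated rollout, scored with and without the visual input
    (an assumed conditioning scheme, not necessarily the paper's).
    """
    log_p_vis = F.log_softmax(logits_with_image, dim=-1)
    log_p_txt = F.log_softmax(logits_text_only, dim=-1)
    # KL(p_vis || p_txt) = sum_v p_vis(v) * (log p_vis(v) - log p_txt(v))
    kl = (log_p_vis.exp() * (log_p_vis - log_p_txt)).sum(dim=-1)
    return kl  # [seq_len]: one dependency score per generated token


def reshape_advantages(adv: torch.Tensor,
                       dependency: torch.Tensor,
                       tau: float = 0.1,
                       alpha: float = 2.0,
                       beta: float = 0.5) -> torch.Tensor:
    """Threshold-gated, mass-conserving advantage reshaping (sketch).

    adv:        [seq_len] sequence-level advantage broadcast uniformly
                to every token, as in GRPO-style RLVR.
    dependency: [seq_len] token visual dependency scores.
    Tokens above the threshold tau are amplified by alpha, the rest are
    damped by beta; a final rescaling keeps the total advantage mass
    equal to the original (hypothetical hyperparameters).
    """
    gate = (dependency > tau).float()
    weights = gate * alpha + (1.0 - gate) * beta
    # Conserve mass: keep the summed |weighted advantage| unchanged.
    scale = adv.abs().sum() / (weights * adv).abs().sum().clamp_min(1e-8)
    return weights * scale * adv
```

Under this reading, a GRPO-style trainer would simply multiply the reshaped per-token advantages into the token-level policy-gradient loss in place of the uniform sequence-level advantage.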