This paper introduces Value Gradient Flow (VGF), a novel approach to behavior-regularized reinforcement learning that avoids explicit policy parameterization. VGF frames the RL problem as an optimal transport problem, mapping a reference distribution to a value-induced optimal policy distribution using discrete gradient flow. Experiments demonstrate that VGF achieves state-of-the-art performance on offline RL benchmarks and LLM RL tasks, outperforming existing methods.
Forget policy gradients: Value Gradient Flow (VGF) offers a simpler, more scalable way to align LLMs by directly optimizing value functions via optimal transport.
We study behavior-regularized reinforcement learning (RL), where regularization toward a reference distribution (the dataset in offline RL or the base model in LLM RL finetuning) is essential to prevent value over-optimization caused by erroneous out-of-distribution extrapolation. Existing methods either rely on reparameterized policy gradients, which are difficult to scale to large generative models, or on rejection sampling, which can be overly conservative when attempting to move beyond the behavior support. In this paper, we propose Value Gradient Flow (VGF), a scalable new paradigm for behavior-regularized RL. VGF casts behavior-regularized RL as an optimal transport problem that maps the reference distribution to the value-induced optimal policy distribution. We solve this transport problem via discrete gradient flow, where value gradients guide particles initialized from the reference distribution. Our analysis shows that VGF imposes regularization implicitly by controlling the transport budget. VGF eliminates explicit policy parameterization while remaining expressive and flexible, which enables adaptive test-time scaling by adjusting the transport budget. Extensive experiments demonstrate that VGF significantly outperforms prior methods, achieving state-of-the-art results on offline RL benchmarks (D4RL, OGBench) and LLM RL tasks. Code and runs can be found at https://ryanxhr.github.io/vgf.
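To make the core idea concrete, here is a minimal sketch of the particle-based discrete gradient flow the abstract describes. Everything here is an illustrative assumption, not the paper's implementation: a toy 1-D quadratic value function stands in for a learned critic, a standard normal stands in for the reference (behavior) distribution, and the transport budget is modeled simply as the number of gradient steps times the step size.

```python
import numpy as np

# Toy stand-in for a learned value function, peaked at x = 2.0
# (the paper's setting uses learned critics over actions or tokens).
def value_grad(x):
    return -2.0 * (x - 2.0)  # gradient of -(x - 2)^2

def vgf_sample(n_particles=1000, n_steps=50, step_size=0.01, seed=0):
    """Discrete gradient flow: particles start as samples from the
    reference distribution and move along value gradients. The
    transport budget (n_steps * step_size) bounds how far particles
    can drift from the reference, acting as implicit behavior
    regularization rather than an explicit KL penalty."""
    rng = np.random.default_rng(seed)
    particles = rng.standard_normal(n_particles)  # reference samples
    for _ in range(n_steps):
        particles = particles + step_size * value_grad(particles)
    return particles

samples = vgf_sample()
```

With a small budget the particle cloud shifts toward the value maximum but stays anchored near the reference; increasing `n_steps` at test time trades conservatism for optimization, which is the adaptive test-time scaling knob the abstract mentions.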