The paper introduces Distribution Contractive Reinforcement Learning (DICE-RL), a method to refine pre-trained generative robot policies by using RL as a distribution contraction operator. DICE-RL amplifies high-success behaviors from online feedback via a stable, sample-efficient residual off-policy RL framework that combines selective behavior regularization with value-guided action selection. Experiments demonstrate that DICE-RL reliably improves performance with strong stability and sample efficiency, enabling mastery of complex manipulation skills from pixel inputs in simulation and on a real robot.
Turn your robot's clumsy pre-trained behaviors into expert-level skills with DICE-RL, a surprisingly stable and efficient RL fine-tuning method.
We introduce Distribution Contractive Reinforcement Learning (DICE-RL), a framework that uses reinforcement learning (RL) as a "distribution contraction" operator to refine pretrained generative robot policies. DICE-RL turns a pretrained behavior prior into a high-performing "pro" policy by amplifying high-success behaviors from online feedback. We pretrain a diffusion- or flow-based policy for broad behavioral coverage, then finetune it with a stable, sample-efficient residual off-policy RL framework that combines selective behavior regularization with value-guided action selection. Extensive experiments and analyses show that DICE-RL reliably improves performance with strong stability and sample efficiency. It enables mastery of complex long-horizon manipulation skills directly from high-dimensional pixel inputs, both in simulation and on a real robot. Project website: https://zhanyisun.github.io/dice.rl.2026/.
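To make the recipe concrete, here is a minimal sketch of the value-guided action selection step over a pretrained generative prior with a residual correction. All names (`prior_policy`, `residual_policy`, `q_net`, `num_candidates`) are hypothetical placeholders rather than the paper's API, and the full DICE-RL objective additionally applies selective behavior regularization during the off-policy updates.

```python
import torch

@torch.no_grad()
def select_action(obs, prior_policy, residual_policy, q_net, num_candidates=16):
    """Illustrative value-guided action selection (not the paper's exact code).

    Samples candidate actions from a pretrained diffusion/flow behavior prior,
    applies a learned residual correction, and returns the candidate with the
    highest estimated Q-value.
    """
    # Replicate the observation so the prior can sample N candidates at once.
    obs_batch = obs.unsqueeze(0).expand(num_candidates, *obs.shape)

    # Draw diverse base actions from the pretrained generative prior
    # (prior_policy.sample is an assumed interface).
    base_actions = prior_policy.sample(obs_batch)          # (N, action_dim)

    # Residual off-policy RL: a small learned correction on top of the prior.
    actions = base_actions + residual_policy(obs_batch)    # (N, action_dim)

    # Score each candidate with the critic and keep the best one.
    q_values = q_net(obs_batch, actions).squeeze(-1)       # (N,)
    return actions[q_values.argmax()]
```

This kind of sample-then-rank selection is one standard way to "contract" a broad behavior distribution toward its high-value modes at deployment time; the paper's training-time contraction via selective behavior regularization is a separate mechanism not shown here.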