The paper introduces World4RL, a framework that uses diffusion-based world models as simulators to refine pre-trained robotic manipulation policies in imagined environments. World4RL pre-trains a diffusion world model on multi-task datasets to capture diverse dynamics and then refines policies within this frozen world model, avoiding real-world interactions. Experiments show that World4RL achieves higher success rates compared to imitation learning and other baselines due to its high-fidelity environment modeling and consistent policy refinement.
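The core idea of refining a policy inside a frozen learned simulator can be sketched in a few lines. Everything below is a hypothetical stand-in, not the paper's implementation: a toy linear "world model" replaces the diffusion model, and a simple REINFORCE-style update replaces the paper's policy optimization; only the structure of the loop (imagined rollouts, frozen dynamics, policy-only updates) reflects the described framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins (not from the paper): a frozen "world model" that
# maps (state, action) -> (next_state, reward), and a linear Gaussian policy.
W_dyn = rng.normal(scale=0.1, size=(4, 6))       # frozen dynamics weights

def world_model(state, action):
    """Toy frozen simulator standing in for the diffusion world model."""
    x = np.concatenate([state, action])
    next_state = np.tanh(W_dyn @ x)
    reward = -float(np.linalg.norm(next_state))  # toy shaped reward
    return next_state, reward

theta = np.zeros((2, 4))   # policy parameters: the only thing updated
sigma = 0.1                # fixed exploration noise

def policy(state):
    mean = theta @ state
    return mean + sigma * rng.normal(size=2), mean

# Refinement loop: rollouts happen only inside the frozen world model
# (no real-world interaction); a REINFORCE-style update (no baseline,
# for brevity) adjusts the policy parameters.
for episode in range(200):
    state = rng.normal(size=4)
    grad, ret = np.zeros_like(theta), 0.0
    for t in range(5):
        action, mean = policy(state)
        grad += np.outer(action - mean, state) / sigma**2
        state, reward = world_model(state, action)
        ret += reward
    theta += 1e-4 * ret * grad
```

The key property the sketch preserves is that `world_model` is never updated during refinement: only the policy changes, so there is no risk of the policy exploiting drift in the simulator's own training.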
Imagine training robots to manipulate objects in the real world, but entirely within a high-fidelity, diffusion-based dream.
Robotic manipulation policies are commonly initialized through imitation learning, but their performance is limited by the scarcity and narrow coverage of expert data. Reinforcement learning can refine policies to alleviate this limitation, yet real-robot training is costly and unsafe, while training in simulators suffers from the sim-to-real gap. Recent advances in generative models have demonstrated remarkable capabilities in real-world simulation, with diffusion models in particular excelling at high-fidelity generation. This raises the question of how diffusion-based world models can be leveraged to enhance pre-trained policies for robotic manipulation. In this work, we propose World4RL, a framework that employs diffusion-based world models as high-fidelity simulators to refine pre-trained robotic manipulation policies entirely in imagined environments. Unlike prior works that primarily employ world models for planning, our framework enables direct end-to-end policy optimization. World4RL is designed around two principles: pre-training a diffusion world model on multi-task datasets to capture diverse dynamics, and refining policies entirely within the frozen world model to avoid online real-world interaction. We further design a two-hot action encoding scheme tailored to robotic manipulation and adopt diffusion backbones to improve modeling fidelity. Extensive simulation and real-world experiments demonstrate that World4RL provides high-fidelity environment modeling and enables consistent policy refinement, yielding significantly higher success rates than imitation learning and other baselines. More visualization results are available at https://world4rl.github.io/.
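The abstract mentions a two-hot action encoding. A minimal sketch of the general two-hot idea (a continuous scalar represented as weights on its two nearest discrete bins, so the encoding is exactly invertible) is below; the bin layout, clipping behavior, and function names are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def two_hot_encode(x, bins):
    """Encode a scalar as weights on the two nearest bin centers.

    Assumptions (not from the paper): `bins` is a sorted 1-D array of bin
    centers, and `x` is clipped into [bins[0], bins[-1]].
    """
    x = float(np.clip(x, bins[0], bins[-1]))
    i = int(np.searchsorted(bins, x, side="right")) - 1
    i = min(i, len(bins) - 2)                 # keep (i, i+1) in range
    upper_w = (x - bins[i]) / (bins[i + 1] - bins[i])
    code = np.zeros(len(bins))
    code[i] = 1.0 - upper_w                   # weight on the lower bin
    code[i + 1] = upper_w                     # weight on the upper bin
    return code

def two_hot_decode(code, bins):
    """Recover the scalar as the expectation over bin centers."""
    return float(np.dot(code, bins))

bins = np.linspace(-1.0, 1.0, 11)   # 11 bins over a normalized action range
code = two_hot_encode(0.33, bins)   # mass split between bins 0.2 and 0.4
```

Unlike a plain one-hot discretization, the two-hot code loses no precision inside the bin range, which is why it is attractive for continuous action dimensions in manipulation.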