Simulation Distillation (SimDist) is a sim-to-real framework that pretrains a latent world model using structural priors from a simulator. It transfers reward and value models directly from simulation, providing dense planning signals and avoiding value learning during real-world deployment. Rapid real-world adaptation comes from online planning and supervised dynamics finetuning, and the method outperforms existing approaches in data efficiency, stability, and final performance on manipulation and locomotion tasks.
Forget painstakingly tuning RL in the real world: SimDist lets you pretrain a world model in simulation and then rapidly adapt it via supervised learning, slashing real-world data requirements and boosting performance.
Simulation-to-real transfer remains a central challenge in robotics, as mismatches between simulated and real-world dynamics often lead to failures. While reinforcement learning offers a principled mechanism for adaptation, existing sim-to-real finetuning methods struggle with exploration and long-horizon credit assignment in the low-data regimes typical of real-world robotics. We introduce Simulation Distillation (SimDist), a sim-to-real framework that distills structural priors from a simulator into a latent world model and enables rapid real-world adaptation via online planning and supervised dynamics finetuning. By transferring reward and value models directly from simulation, SimDist provides dense planning signals from raw perception without requiring value learning during deployment. As a result, real-world adaptation reduces to short-horizon system identification, avoiding long-horizon credit assignment and enabling fast, stable improvement. Across precise manipulation and quadruped locomotion tasks, SimDist substantially outperforms prior methods in data efficiency, stability, and final performance. Project website and code: https://sim-dist.github.io/
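To make the pipeline concrete, here is a minimal sketch of the two deployment-time ingredients the abstract describes: short-horizon planning against reward and value models that are frozen after simulation, and supervised finetuning of only the dynamics model from real transitions. All function names, the toy linear latent dynamics, and the random-shooting planner are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical stand-ins for models transferred from simulation.
# At deployment these stay frozen; only the dynamics model is finetuned.
def reward_fn(z, a):
    # frozen reward model: prefer latents near the origin, penalize action cost
    return -np.sum(z**2) - 0.1 * np.sum(a**2)

def value_fn(z):
    # frozen value model used to bootstrap short planning horizons
    return -np.sum(z**2)

# Learnable latent dynamics (toy linear model): z' = A z + B a
A = np.eye(2) * 0.9
B = rng.normal(size=(2, 1)) * 0.1

def dynamics(z, a):
    return A @ z + B @ a

def plan(z0, horizon=5, n_samples=64):
    """Short-horizon random-shooting planner using the frozen reward/value."""
    best_ret, best_a0 = -np.inf, None
    for _ in range(n_samples):
        actions = rng.normal(size=(horizon, 1))
        z, ret = z0, 0.0
        for a in actions:
            ret += reward_fn(z, a)
            z = dynamics(z, a)
        ret += value_fn(z)  # transferred value model closes the horizon
        if ret > best_ret:
            best_ret, best_a0 = ret, actions[0]
    return best_a0

def finetune(transitions, lr=0.05, steps=200):
    """Supervised dynamics finetuning: regress predicted next latents onto
    observed ones (short-horizon system identification, no RL objective)."""
    global A, B
    for _ in range(steps):
        for z, a, z_next in transitions:
            err = dynamics(z, a) - z_next   # one-step prediction residual
            A -= lr * np.outer(err, z)      # grad of 0.5*||err||^2 w.r.t. A
            B -= lr * np.outer(err, a)      # grad of 0.5*||err||^2 w.r.t. B
```

A usage sketch: collect real transitions `(z, a, z_next)` while acting with `plan`, call `finetune` on them, and keep planning with the updated dynamics. Because reward and value never change, adaptation reduces to fitting `A` and `B`, which is the point the abstract makes about avoiding long-horizon credit assignment.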