This paper addresses the limitations of domain randomization in sim2real transfer for robotics by enabling safe, continual adaptation of RL policies after deployment. The authors combine safe RL techniques with continual learning within the domain-randomized simulation, allowing the policy to adapt to real-world dynamics while minimizing safety risks. Experimental results demonstrate that the method adapts to the real system's domain distribution and environment dynamics while avoiding catastrophic forgetting of the pre-trained policy.
Achieve safe and efficient real-world robot control by continually adapting policies trained in simulation, overcoming the limitations of fixed policies and wide randomization ranges.
Domain randomization has emerged as a fundamental technique in reinforcement learning (RL) for transferring policies from simulation to real-world robotic applications. Existing approaches rely on wide randomization ranges to compensate for unknown actual system parameters, yielding robust but inefficient real-world policies. Moreover, policies pretrained in domain-randomized simulation are kept fixed after deployment, both because of the inherent instability of RL-based optimization and because adaptation would require sampling exploitative but potentially unsafe actions on the real system. This limits the ability of the deployed policy to adapt to system parameters and environment dynamics that inevitably change over time. We leverage safe RL and continual learning under domain-randomized simulation to address these limitations and enable safe deployment-time policy adaptation in real-world robot control. The experiments show that our method enables the policy to adapt to the current domain distribution and environment dynamics of the real system while minimizing safety risks and avoiding catastrophic forgetting of the general policy learned in randomized simulation during pretraining. Videos and supplementary material are available at https://safe-cda.github.io/.
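The abstract does not detail the training objective, but the combination it describes (a safe-RL constraint on real-system actions plus a continual-learning mechanism that protects the pretrained policy) can be illustrated with a generic sketch. Everything below is a hypothetical illustration under assumed names, not the paper's implementation: `GaussianPolicy`, `safe_adaptation_step`, the Lagrange multiplier `lam`, and the `anchor_coef` weight are all invented for this example, and the L2 anchor to the pretrained weights stands in for whatever anti-forgetting mechanism the method actually uses.

```python
import copy
import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    """Hypothetical diagonal-Gaussian policy; the paper's actual
    architecture is not specified in the abstract."""
    def __init__(self, obs_dim=8, act_dim=2):
        super().__init__()
        self.mean = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def log_prob(self, obs, act):
        dist = torch.distributions.Normal(self.mean(obs), self.log_std.exp())
        return dist.log_prob(act).sum(-1)

def safe_adaptation_step(policy, pretrained, optimizer, batch,
                         lam=1.0, anchor_coef=1e-2):
    """One sketched deployment-time update: maximize a reward surrogate,
    penalize estimated safety cost via a Lagrange multiplier (safe RL),
    and anchor to the frozen pretrained weights to limit forgetting."""
    obs, act, reward_adv, cost_adv = batch
    logp = policy.log_prob(obs, act)
    reward_term = -(reward_adv * logp).mean()     # policy-gradient surrogate
    safety_term = lam * (cost_adv * logp).mean()  # discourage costly actions
    anchor = sum(((p - q.detach()) ** 2).sum()    # stay near pretrained policy
                 for p, q in zip(policy.parameters(), pretrained.parameters()))
    loss = reward_term + safety_term + anchor_coef * anchor
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with placeholder rollout data (obs, actions, reward and cost advantages):
policy = GaussianPolicy()
pretrained = copy.deepcopy(policy)  # frozen snapshot from simulation pretraining
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)
batch = (torch.randn(32, 8), torch.randn(32, 2),
         torch.randn(32), torch.rand(32))
safe_adaptation_step(policy, pretrained, opt, batch)
```

The design intuition this sketch captures is the one the abstract states: the Lagrangian cost term keeps exploration on the real system within a safety budget, while the anchor to the pretrained parameters preserves the general behavior learned across the randomized simulation domains.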