This paper explores using Normalizing Flows (NF) as a policy parameterization in deep reinforcement learning for robotics, addressing the limitations of unimodal Gaussian policies. The authors identify and analyze training instability issues when directly applying NF in online RL. They then introduce a simple stabilization technique, NFPO, that enables robust and high-performing policy learning.
Normalizing Flows can now be used to train robust robotic policies in online reinforcement learning, thanks to a simple stabilization technique that overcomes previous instability issues.
Deep Reinforcement Learning (DRL) has experienced significant advancements in recent years and has been widely applied in many fields. In DRL-based robotic policy learning, however, the de facto policy parameterization is still the multivariate Gaussian (with a diagonal covariance matrix), which cannot model multi-modal distributions. In this work, we explore adopting a modern network architecture, the Normalizing Flow (NF), as the policy parameterization, for its multi-modal modeling capability, closed-form log probability, and low computation and memory overhead. However, naively training an NF policy in online Reinforcement Learning (RL) usually leads to training instability. We provide a detailed analysis of this phenomenon and address it with a simple but effective technique. Through extensive experiments in multiple simulation environments, we show that our method, NFPO, obtains robust and strong performance on widely used robotic learning tasks and transfers successfully to real-world robots.
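The "closed form of log probability" property the abstract highlights is what makes flows usable as RL policies: actions are sampled by pushing base noise through an invertible map, and the exact density follows from the change-of-variables formula. The sketch below illustrates this with a single state-conditioned affine coupling layer over a 2-D action; the architecture, parameter shapes, and conditioning scheme are illustrative assumptions, not the NFPO design from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters of one affine coupling layer, conditioned on a
# 2-D state (illustrative only -- not the architecture used in the paper).
W_s, b_s = rng.normal(size=(1, 3)) * 0.1, np.zeros(1)
W_t, b_t = rng.normal(size=(1, 3)) * 0.1, np.zeros(1)

def scale_shift(z1, state):
    """Compute the coupling layer's log-scale and shift from (z1, state)."""
    h = np.concatenate([z1, state])
    s = np.tanh(W_s @ h + b_s)   # log-scale, bounded for stability
    t = W_t @ h + b_t            # shift
    return s, t

def log_base(z):
    """Log-density of the standard-normal base distribution."""
    return -0.5 * np.sum(z**2) - z.size / 2 * np.log(2 * np.pi)

def sample_and_logprob(state):
    """Sample a 2-D action and return its exact log-probability."""
    z = rng.normal(size=2)       # base sample z ~ N(0, I)
    z1, z2 = z[:1], z[1:]
    s, t = scale_shift(z1, state)
    a = np.concatenate([z1, z2 * np.exp(s) + t])   # coupling transform
    # change of variables: log p(a) = log N(z) - sum(log-scales)
    return a, log_base(z) - np.sum(s)

def logprob(a, state):
    """Invert the flow to evaluate log p(a | state) for any given action."""
    a1, a2 = a[:1], a[1:]
    s, t = scale_shift(a1, state)
    z = np.concatenate([a1, (a2 - t) * np.exp(-s)])
    return log_base(z) - np.sum(s)

state = np.array([0.3, -0.1])
a, lp = sample_and_logprob(state)
# density is exact in both directions (sampling and evaluation)
assert np.isclose(lp, logprob(a, state))
```

Unlike a diagonal Gaussian, stacking several such coupling layers lets the action distribution become multi-modal, while the log-probability (needed for policy-gradient or entropy terms) stays exact and cheap to evaluate.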