This paper introduces MORLAX, a GPU-native MORL algorithm, and MO-Playground, a collection of GPU-accelerated multi-objective environments, to address the computational bottleneck in MORL for robotics. MORLAX leverages massively parallel simulation to achieve 25-270x speedups over CPU-based methods while improving Pareto front hypervolume. The approach is validated on a custom BRUCE humanoid robot environment, where Pareto-optimal locomotion policies are learned across six objectives.
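Pareto front hypervolume, the metric used above to compare front quality, is the volume of objective space dominated by the front relative to a reference point. A minimal sketch for the two-objective maximization case (not the paper's implementation, which handles six objectives):

```python
# Hedged sketch: 2-D hypervolume for a maximization problem.
# `front` is a list of mutually non-dominated (obj1, obj2) points;
# `ref` is a reference point dominated by every solution.

def hypervolume_2d(front, ref):
    """Area dominated by `front` relative to `ref` (maximization)."""
    # Sort by the first objective, descending; second objective then ascends.
    pts = sorted(front, key=lambda p: p[0], reverse=True)
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        # Each point adds the rectangle strip it dominates exclusively.
        hv += (x - ref[0]) * (y - prev_y)
        prev_y = y
    return hv

front = [(1.0, 3.0), (3.0, 1.0), (2.0, 2.0)]
print(hypervolume_2d(front, (0.0, 0.0)))  # → 6.0
```

A larger hypervolume means the front covers more of the objective space, so it serves as a single scalar for comparing entire Pareto approximations.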
Forget waiting hours: this MORL framework achieves up to 270x speedups on robotics tasks thanks to GPU-native parallelization.
Multi-objective reinforcement learning (MORL) is a powerful tool for learning Pareto-optimal policy families across conflicting objectives. However, unlike traditional RL algorithms, existing MORL algorithms do not effectively leverage large-scale parallelization to concurrently simulate thousands of environments, resulting in vastly increased computation time. Ultimately, this has limited MORL's application to complex multi-objective robotics problems. To address these challenges, we present 1) MORLAX, a new GPU-native, fast MORL algorithm, and 2) MO-Playground, a pip-installable playground of GPU-accelerated multi-objective environments. Together, MORLAX and MO-Playground approximate Pareto sets within minutes, offering 25-270x speed-ups compared to legacy CPU-based approaches whilst achieving superior Pareto front hypervolumes. We demonstrate the versatility of our approach by implementing a custom BRUCE humanoid robot environment using MO-Playground and learning Pareto-optimal locomotion policies across six realistic objectives for BRUCE, such as smoothness, efficiency, and arm swinging.
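The GPU-native pattern the abstract describes — concurrently simulating thousands of environments — can be sketched in JAX. This is an illustrative toy, not the MORLAX or MO-Playground API: the environment, its dynamics, and the two reward terms are invented here to show how `jax.vmap` plus `jax.jit` turn a single-environment step into a batched, accelerator-friendly one that returns a vector of rewards (one per objective).

```python
import jax
import jax.numpy as jnp

# Toy single-environment step (hypothetical dynamics, not from the paper):
# state and action are scalars; the step returns the next state and a
# vector of two conflicting rewards (move fast vs. spend little energy).
def step(state, action):
    new_state = state + action
    r_speed = action                # objective 1: forward progress
    r_energy = -jnp.abs(action)    # objective 2: energy penalty
    return new_state, jnp.stack([r_speed, r_energy])

# Vectorize across a batch of environments, then JIT-compile the whole
# batched step so it runs as one fused kernel on GPU/TPU (or CPU).
batched_step = jax.jit(jax.vmap(step))

n_envs = 4096
states = jnp.zeros(n_envs)
actions = jnp.full(n_envs, 0.5)

next_states, rewards = batched_step(states, actions)
print(next_states.shape)  # (4096,)
print(rewards.shape)      # (4096, 2) — one reward per objective, per env
```

Because the batch dimension is handled by `vmap` rather than a Python loop, scaling from one environment to thousands changes only the array shapes, which is what makes the reported 25-270x speed-ups over sequential CPU simulation plausible.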