The paper introduces Re4MPC, a multi-model motion planning pipeline that uses Deep Reinforcement Learning (DRL) to reactively select the model, cost, and constraints of a Nonlinear Model Predictive Control (NMPC) problem for efficient trajectory generation in robots with many degrees of freedom. By learning a policy for reactive decision-making, Re4MPC reduces the computational cost associated with traditional NMPC approaches. Experimental results on a simulated mobile manipulator demonstrate that Re4MPC achieves higher success rates and is more computationally efficient than an NMPC baseline.
Achieve faster and more reliable robot motion planning by reactively switching between simplified models within a Nonlinear Model Predictive Control framework, guided by deep reinforcement learning.
Traditional motion planning methods for robots with many degrees of freedom, such as mobile manipulators, are often computationally prohibitive for real-world settings. In this paper, we propose a novel multi-model motion planning pipeline, termed Re4MPC, which computes trajectories using Nonlinear Model Predictive Control (NMPC). Re4MPC generates trajectories in a computationally efficient manner by reactively selecting the model, cost, and constraints of the NMPC problem depending on the complexity of the task and the robot state. The policy for this reactive decision-making is learned via a Deep Reinforcement Learning (DRL) framework. We introduce a mathematical formulation to integrate NMPC into this DRL framework. To validate our methodology and design choices, we evaluate DRL training and test outcomes in a physics-based simulation involving a mobile manipulator. Experimental results demonstrate that Re4MPC is more computationally efficient and achieves higher success rates in reaching end-effector goals than the NMPC baseline, which computes whole-body trajectories without our learning mechanism.
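The core idea of the abstract can be sketched in code: a learned policy picks, each control cycle, which NMPC configuration (model, cost, constraints) to solve. The sketch below is a minimal illustration under assumed names; the configuration set (`base_only`, `arm_only`, `whole_body`), state dimensions, and the distance-threshold stand-in policy are hypothetical, not the paper's actual formulation, and the NMPC solve itself is omitted.

```python
from dataclasses import dataclass

# Hypothetical NMPC configuration the DRL action selects among.
# Field values are illustrative, not taken from the paper.
@dataclass
class NMPCConfig:
    name: str           # which kinematic model the NMPC problem uses
    state_dim: int      # size of that model's state vector
    horizon: int        # prediction horizon length
    cost_terms: tuple   # active cost terms for this configuration

# Candidate configurations of increasing fidelity (and solve cost).
CONFIGS = [
    NMPCConfig("base_only", state_dim=3, horizon=20, cost_terms=("base_goal",)),
    NMPCConfig("arm_only", state_dim=7, horizon=20, cost_terms=("ee_goal",)),
    NMPCConfig("whole_body", state_dim=10, horizon=30,
               cost_terms=("ee_goal", "base_goal", "posture")),
]

def select_config(action: int) -> NMPCConfig:
    """Map a discrete DRL action to the NMPC problem solved this cycle."""
    return CONFIGS[action]

def toy_policy(dist_to_goal: float) -> int:
    """Stand-in for the learned policy: prefer the cheap base model far
    from the goal, and the coupled whole-body model only when the task
    demands it. A trained DRL policy would replace these thresholds."""
    if dist_to_goal > 2.0:
        return 0  # base_only: drive toward the goal cheaply
    elif dist_to_goal > 0.5:
        return 2  # whole_body: coordinate base and arm near the goal
    return 1      # arm_only: fine end-effector positioning

cfg = select_config(toy_policy(dist_to_goal=3.0))
print(cfg.name)  # base_only
```

The computational saving comes from solving the small `base_only` problem in most cycles and paying for the high-dimensional whole-body NMPC only when the policy judges it necessary.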