This paper introduces a hierarchical control architecture for legged and hybrid locomotion, combining a high-level RL agent for gait selection and navigation with a low-level MPC for trajectory tracking. The RL agent learns acyclic gaits, relieving the MPC of contact timing optimization. The approach achieves zero-shot sim-to-sim and sim-to-real transfer on various platforms, including the Centauro robot, without domain randomization.
Unlock zero-shot sim-to-real transfer for complex legged robots by offloading gait selection to a learned policy that guides a lower-level MPC.
We propose a contact-explicit hierarchical architecture coupling Reinforcement Learning (RL) and Model Predictive Control (MPC), where a high-level RL agent provides gait and navigation commands to a low-level locomotion MPC. This offloads the combinatorial burden of contact timing from the MPC by learning acyclic gaits through trial and error in simulation. We show that only a minimal set of rewards and limited tuning are required to obtain effective policies. We validate the architecture in simulation across robotic platforms spanning 50 kg to 120 kg and different MPC implementations, observing the emergence of acyclic gaits and timing adaptations in flat-terrain legged and hybrid locomotion, and further demonstrating extensibility to non-flat terrains. Across all platforms, we achieve zero-shot sim-to-sim transfer without domain randomization, and we further demonstrate zero-shot sim-to-real transfer without domain randomization on Centauro, our 120 kg wheeled-legged humanoid robot. We make our software framework and evaluation results publicly available at https://github.com/AndrePatri/AugMPC.
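The hierarchy described above can be sketched as a simple control loop in which the RL agent emits contact and navigation commands and the MPC plans under that fixed contact schedule. This is a minimal illustrative sketch only: all class and field names (HighLevelPolicy, LocomotionMPC, GaitCommand) are assumptions for exposition, not the authors' actual API, and the policy here emits a fixed alternating pattern in place of a trained network.

```python
# Hypothetical sketch of the contact-explicit RL -> MPC interface; names
# and logic are illustrative assumptions, not the released framework's API.
from dataclasses import dataclass
from typing import List


@dataclass
class GaitCommand:
    """High-level command: per-end-effector contact flags + base velocity."""
    contact_flags: List[bool]    # which end-effectors should be in stance
    base_velocity: List[float]   # desired [vx, vy, yaw_rate]


class HighLevelPolicy:
    """Stand-in for the learned RL agent that selects (acyclic) gaits."""

    def act(self, observation: List[float]) -> GaitCommand:
        # A trained policy would map proprioception/terrain features to
        # contact timing; here we just alternate a trot-like pattern.
        step = int(observation[0]) % 2
        flags = [step == 0, step == 1, step == 1, step == 0]
        return GaitCommand(contact_flags=flags, base_velocity=[0.5, 0.0, 0.0])


class LocomotionMPC:
    """Stand-in low-level MPC tracking the commanded contacts/velocity."""

    def solve(self, cmd: GaitCommand) -> dict:
        # A real MPC would optimize motions/forces over a horizon given the
        # fixed contact schedule; we return a placeholder plan.
        return {"contacts": cmd.contact_flags, "v_ref": cmd.base_velocity}


# One control tick: the RL agent fixes the contact schedule, so the MPC
# no longer has to solve the combinatorial contact-timing problem itself.
policy, mpc = HighLevelPolicy(), LocomotionMPC()
plan = mpc.solve(policy.act([0.0] * 10))
print(plan["contacts"])
```

The key design point, as stated in the abstract, is that contact timing is decided by the policy rather than optimized inside the MPC, which keeps the low-level problem smooth and real-time tractable.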