This paper introduces DreamWaQ++, a multimodal reinforcement learning framework that fuses proprioceptive and exteroceptive information to enable robust quadrupedal locomotion in complex environments. The approach addresses the limitations of proprioception-only methods, which struggle with collision avoidance, and exteroception-only methods, which require precise environmental maps. The resulting controller demonstrates agile locomotion on a quadrupedal robot across diverse real-world courses, including rough terrain, slopes, and stairs, while remaining robust in out-of-distribution scenarios.
Quadrupedal robots can now nimbly climb stairs and cross rough terrain thanks to a new multimodal RL approach that spares them from feeling their way with their front feet.
Quadrupedal robots hold promise for navigating cluttered environments with resilience akin to that of their animal counterparts. Their floating-base configuration, however, makes them susceptible to real-world uncertainties, posing substantial challenges for locomotion control. Deep reinforcement learning has emerged as a viable approach to developing robust locomotion controllers, yet methods relying solely on proprioception often sacrifice collision-free locomotion because they need front-foot contact to detect stairs and adapt the gait. Incorporating exteroception, meanwhile, demands a precisely modeled map accumulated from exteroceptive sensor observations over time. This work proposes a novel method for fusing proprioception and exteroception within a resilient multimodal reinforcement learning framework. The resulting controller demonstrates agile locomotion on a quadrupedal robot across diverse real-world courses, including rough terrain, steep slopes, and high-rise stairs, while remaining robust in out-of-distribution situations.
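The abstract does not detail the network architecture, but a minimal sketch of what proprioception–exteroception fusion in a locomotion policy might look like follows, written in PyTorch. All module names, observation sizes (a 48-D proprioceptive vector, a 187-point terrain height scan), and layer widths are illustrative assumptions, not the paper's actual DreamWaQ++ design.

```python
# Hypothetical sketch: separate encoders for proprioceptive and
# exteroceptive observations feed a shared actor head that outputs
# joint position targets. Dimensions and layer sizes are assumptions.
import torch
import torch.nn as nn


class MultimodalLocomotionPolicy(nn.Module):
    def __init__(self, proprio_dim=48, extero_dim=187,
                 action_dim=12, latent_dim=64):
        super().__init__()
        # Encode joint states, base velocities, gravity vector, etc.
        self.proprio_encoder = nn.Sequential(
            nn.Linear(proprio_dim, 128), nn.ELU(),
            nn.Linear(128, latent_dim), nn.ELU(),
        )
        # Encode a flattened height scan of the terrain around the robot.
        self.extero_encoder = nn.Sequential(
            nn.Linear(extero_dim, 128), nn.ELU(),
            nn.Linear(128, latent_dim), nn.ELU(),
        )
        # Fuse the two latents and map them to joint position targets.
        self.actor = nn.Sequential(
            nn.Linear(2 * latent_dim, 128), nn.ELU(),
            nn.Linear(128, action_dim),
        )

    def forward(self, proprio, extero):
        z = torch.cat([self.proprio_encoder(proprio),
                       self.extero_encoder(extero)], dim=-1)
        return self.actor(z)


# Example forward pass with dummy observations.
policy = MultimodalLocomotionPolicy()
proprio = torch.randn(1, 48)   # e.g., joint angles/velocities, IMU readings
extero = torch.randn(1, 187)   # e.g., flattened terrain height scan
action = policy(proprio, extero)
print(action.shape)            # torch.Size([1, 12]) joint position targets
```

Training such a policy with RL (e.g., PPO in simulation with terrain randomization) is where the paper's contribution lies; the fusion sketch above only illustrates the observation plumbing that any such multimodal controller needs.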