This paper introduces a vision-based hierarchical control framework for bipedal robots that combines a reinforcement learning-based footstep planner with a low-level Operational Space Controller. The high-level planner uses a local elevation map to generate footstep commands, while the low-level controller tracks the resulting trajectories. The approach leverages an Angular Momentum Linear Inverted Pendulum model for efficient state representation and is validated through simulations and hardware experiments on the Cassie robot.
Reinforcement learning enables Cassie the bipedal robot to nimbly navigate complex terrains using only vision, bypassing brittle, hand-engineered visual pipelines.
Bipedal robots show promise in traversing challenging terrain through dynamic ground contact. However, current frameworks often rely solely on proprioception or employ manually designed visual pipelines, which are brittle in real-world settings and complicate real-time footstep planning in unstructured environments. To address these limitations, we present a vision-based hierarchical control framework that couples a reinforcement-learning-based high-level footstep planner, which generates footstep commands from a local elevation map, with a low-level Operational Space Controller that tracks the resulting trajectories. We use the Angular Momentum Linear Inverted Pendulum model to construct a low-dimensional state representation that captures an informative encoding of the dynamics while reducing complexity. We evaluate our method across different terrain conditions on the underactuated bipedal robot Cassie and investigate the capabilities and challenges of our approach through simulation and hardware experiments.
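The Angular Momentum Linear Inverted Pendulum (ALIP) model mentioned above reduces the sagittal-plane dynamics to two states: the CoM position relative to the stance foot and the angular momentum about the contact point. A minimal sketch of its closed-form stance-phase evolution is below; the mass, CoM height, and step duration are illustrative assumptions, not values from the paper.

```python
import math

# Assumed constants for illustration only (not the paper's parameters)
M = 31.0   # robot mass [kg], roughly Cassie-scale
H = 0.9    # constant CoM height [m] assumed by the pendulum model
G = 9.81   # gravitational acceleration [m/s^2]

def alip_step(x0, l0, t, m=M, h=H, g=G):
    """Closed-form ALIP evolution over a stance phase of duration t.

    State: x0 = CoM position relative to the stance foot [m],
           l0 = angular momentum about the contact point [kg m^2/s].
    Dynamics: x_dot = L / (m*h),  L_dot = m*g*x.
    """
    w = math.sqrt(g / h)                        # pendulum frequency [1/s]
    ch, sh = math.cosh(w * t), math.sinh(w * t)
    x = x0 * ch + l0 / (m * h * w) * sh
    l = m * h * w * x0 * sh + l0 * ch
    return x, l

if __name__ == "__main__":
    # Predict the state at the end of an assumed 0.35 s swing phase
    x, l = alip_step(0.05, 2.0, 0.35)
    print(f"CoM offset {x:.3f} m, angular momentum {l:.2f} kg m^2/s")
```

A footstep planner can evaluate this closed form at the expected touchdown time to choose the next foot placement, which is what makes the low-dimensional ALIP state attractive for real-time planning.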