This paper introduces a reinforcement learning (RL)-based control framework for a rope-driven quadrupedal climbing robot, enabling stable locomotion on steep slopes. The RL policy, trained with a learned Tumble Stability Margin (TSM), controls leg movements to maintain balance under disturbances from rope tension and varying slopes. The framework also incorporates dynamic body height adaptation for recovery from instability, and demonstrates successful sim-to-real transfer with robust performance across diverse environments.
A quadrupedal robot masters rope-assisted climbing on steep slopes by learning to anticipate and recover from instability, even with partial information.
This study presents a novel control framework for climbing robots that utilizes both rope and leg mechanisms. The proposed robot ascends steep slopes using two ropes while maintaining its balance and adapting its pose to uneven surfaces through its four legs. The robot's overall movement on the slopes is managed by an ascender module, while leg motions are governed by a reinforcement learning (RL) policy trained to sustain local stability under unpredictable disturbances from rope tensions and varying slopes. To enhance stability under partial observability, the policy integrates a latent context vector with a learned Tumble Stability Margin (TSM) for proactive instability detection. Furthermore, to recover from instability in challenging conditions such as slipping or edge hooking, the framework enables dynamic body height adaptation based on stability feedback. Validated via sim-to-real transfer, the rope-driven climbing robot maintains consistent locomotion stability across various slope environments and responds effectively to hazardous situations using its learned stability awareness.
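The stability-feedback loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names, thresholds, and the heuristic standing in for the learned TSM network are all assumptions. It shows the control pattern the abstract describes, where a predicted stability margin triggers a crouch for recovery and the nominal body height is restored once the margin recovers.

```python
# Hypothetical sketch of stability-aware body height adaptation.
# All names, constants, and the TSM heuristic below are assumed for
# illustration; the paper uses a learned TSM estimator and an RL policy.

NOMINAL_HEIGHT = 0.30  # nominal body height in metres (assumed value)
MIN_HEIGHT = 0.18      # crouched recovery height (assumed value)
TSM_UNSAFE = 0.05      # margin below which the robot is deemed unstable (assumed)
TSM_SAFE = 0.15        # margin above which nominal height is restored (assumed)

def estimate_tsm(roll, pitch, rope_tension_imbalance):
    """Stand-in for the learned TSM network: a simple heuristic that
    shrinks the margin with body tilt and uneven rope tension."""
    tilt_penalty = 0.5 * (abs(roll) + abs(pitch))
    return max(0.0, 0.2 - tilt_penalty - 0.1 * abs(rope_tension_imbalance))

def adapt_body_height(current_height, tsm):
    """Dynamic body height adaptation driven by stability feedback:
    crouch toward MIN_HEIGHT when unstable, return to NOMINAL_HEIGHT
    when the margin is comfortably positive (hysteresis in between)."""
    if tsm < TSM_UNSAFE:
        target = MIN_HEIGHT
    elif tsm > TSM_SAFE:
        target = NOMINAL_HEIGHT
    else:
        target = current_height  # hold height inside the hysteresis band
    # First-order smoothing toward the target to avoid abrupt pose changes.
    return current_height + 0.2 * (target - current_height)

# Example: a sudden tilt plus rope-tension imbalance collapses the margin,
# so the controller begins crouching from the nominal height.
tsm = estimate_tsm(roll=0.25, pitch=0.1, rope_tension_imbalance=0.8)
height = adapt_body_height(NOMINAL_HEIGHT, tsm)
```

The hysteresis band between the two thresholds prevents the body height from oscillating when the estimated margin hovers near a single cutoff, which matters on slopes where rope tension fluctuates continuously.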