This paper introduces anisotropic Lipschitz-constrained policies (ALCP) to enforce task-specified compliance bounds for humanoid robots trained via reinforcement learning. ALCP maps a task-space stiffness upper bound to a state-dependent Lipschitz constraint on the policy Jacobian, enforced via a hinge-squared spectral-norm penalty during RL training. Experiments demonstrate that ALCP improves locomotion stability, impact robustness, and energy efficiency compared to existing methods.
Humanoid robots can now learn to walk with quantifiable, direction-dependent compliance, thanks to a new anisotropic Lipschitz constraint on RL policies.
Reinforcement learning (RL) has demonstrated substantial potential for humanoid bipedal locomotion and the control of complex motions. To cope with oscillations and impacts induced by environmental interactions, compliant control is widely regarded as an effective remedy. However, the model-free nature of RL makes it difficult to impose task-specified and quantitatively verifiable compliance objectives, and classical model-based stiffness designs are not directly applicable. Lipschitz-Constrained Policies (LCP), which regularize the local sensitivity of a policy via gradient penalties, have recently been used to smooth humanoid motions. Nevertheless, existing LCP-based methods typically employ a single scalar Lipschitz budget and lack an explicit connection to physically meaningful compliance specifications in real-world systems. In this study, we propose an anisotropic Lipschitz-constrained policy (ALCP) that maps a task-space stiffness upper bound to a state-dependent Lipschitz-style constraint on the policy Jacobian. The resulting constraint is enforced during RL training via a hinge-squared spectral-norm penalty, preserving physical interpretability while enabling direction-dependent compliance. Experiments on humanoid robots show that ALCP improves locomotion stability and impact robustness, while reducing oscillations and energy usage.
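To make the mechanism concrete, the core training penalty described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the diagonal scaling matrix `D`, and the fixed unit budget for the anisotropic case are all assumptions introduced here for clarity.

```python
import numpy as np

def spectral_norm(J):
    """Largest singular value of the policy Jacobian J = d(action)/d(state)."""
    return np.linalg.svd(J, compute_uv=False)[0]

def hinge_squared_penalty(J, lipschitz_budget):
    """Hinge-squared spectral-norm penalty: zero while the Jacobian's
    spectral norm stays under the (possibly state-dependent) budget,
    and growing quadratically once it exceeds it."""
    excess = max(spectral_norm(J) - lipschitz_budget, 0.0)
    return excess ** 2

def anisotropic_penalty(J, D):
    """Direction-dependent variant (illustrative): D is a hypothetical
    diagonal matrix derived from a task-space stiffness upper bound,
    so each state direction gets its own effective budget. Scaling the
    Jacobian by D^-1 reduces the check to a unit spectral-norm budget."""
    return hinge_squared_penalty(J @ np.linalg.inv(D), 1.0)
```

During RL training, such a penalty would be evaluated on the policy's Jacobian at sampled states and added to the loss, leaving the policy unconstrained until its local sensitivity exceeds the stiffness-derived bound.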