MaskAdapt introduces a two-stage residual learning framework for physics-based humanoid control, enabling flexible motion adaptation. First, a mask-invariant base policy is trained with stochastic body-part masking and action regularization to create a robust motion prior. Then, a residual policy adapts specific body parts while preserving original behaviors, demonstrating versatility in motion composition and text-driven partial goal tracking.
Achieve targeted motion adaptation in physics-based characters by learning a mask-invariant prior, enabling robust control even with missing observations or text-driven partial goals.
We present MaskAdapt, a framework for flexible motion adaptation in physics-based humanoid control. The framework follows a two-stage residual learning paradigm. In the first stage, we train a mask-invariant base policy using stochastic body-part masking together with a regularization term that enforces consistent action distributions across masking conditions. This yields a robust motion prior that remains stable under missing observations and anticipates later adaptation in those regions. In the second stage, a residual policy is trained on top of the frozen base controller to modify only the targeted body parts while preserving the original behavior elsewhere. We demonstrate the versatility of this design through two applications: (i) motion composition, where varying masks enable multi-part adaptation within a single sequence, and (ii) text-driven partial goal tracking, where designated body parts follow kinematic targets produced by a pre-trained text-conditioned autoregressive motion generator. In experiments, MaskAdapt shows strong robustness and adaptability, producing diverse behaviors under masked observations and achieving better targeted motion adaptation than prior work.
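The two-stage design described above can be sketched in code. The following is a minimal, hypothetical PyTorch-style illustration, not the authors' implementation: the `Policy` architecture, the body-part observation slices, the masking probability, and the use of a mean-squared consistency penalty in place of the paper's action-distribution regularizer are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class Policy(nn.Module):
    """Toy policy mapping observations to action means (hypothetical sizes)."""
    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, act_dim)
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def random_part_mask(obs: torch.Tensor, part_slices, p: float = 0.3) -> torch.Tensor:
    """Stage 1: stochastic body-part masking. Each observation slice
    (one hypothetical body part) is zeroed independently with probability p."""
    masked = obs.clone()
    for sl in part_slices:
        if torch.rand(1).item() < p:
            masked[:, sl] = 0.0
    return masked

def consistency_loss(base: Policy, obs: torch.Tensor, part_slices) -> torch.Tensor:
    """Stage-1 regularizer (illustrative): penalize divergence between actions
    computed from full and masked observations, encouraging mask invariance."""
    a_full = base(obs)
    a_masked = base(random_part_mask(obs, part_slices))
    return ((a_full - a_masked) ** 2).mean()

def adapted_action(base: Policy, residual: Policy, obs: torch.Tensor,
                   target_part_mask: torch.Tensor) -> torch.Tensor:
    """Stage 2: residual adaptation atop the frozen base. The residual is
    applied only on targeted action dimensions (mask = 1), so behavior on
    untargeted parts is preserved exactly."""
    with torch.no_grad():  # base controller stays frozen
        a_base = base(obs)
    return a_base + target_part_mask * residual(obs)
```

Because the residual is gated by `target_part_mask`, action dimensions outside the targeted parts are bit-identical to the base policy's output, which is the sense in which original behaviors are preserved elsewhere.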