This paper introduces a lifelong imitation learning framework that uses a multimodal latent replay buffer to store compact representations of visual, linguistic, and robot state information for continual policy refinement. To stabilize adaptation across sequential tasks, the authors propose an incremental feature adjustment mechanism that regularizes task embeddings with an angular margin constraint. Experiments on the LIBERO benchmark demonstrate state-of-the-art performance, with 10-17 point gains in AUC and up to 65% less forgetting compared to existing methods.
Forget catastrophic forgetting: this imitation learning framework reduces forgetting by up to 65% while improving AUC by 10-17 points on the LIBERO benchmark.
We introduce a lifelong imitation learning framework that enables continual policy refinement across sequential tasks under realistic memory and data constraints. Our approach departs from conventional experience replay by operating entirely in a multimodal latent space, where compact representations of visual, linguistic, and robot state information are stored and reused to support future learning. To further stabilize adaptation, we introduce an incremental feature adjustment mechanism that regularizes the evolution of task embeddings through an angular margin constraint, preserving inter-task distinctiveness. Our method establishes a new state of the art on the LIBERO benchmark, achieving 10-17 point gains in AUC and up to 65% less forgetting compared to previous leading methods. Ablation studies confirm the effectiveness of each component, showing consistent gains over alternative strategies. The code is available at: https://github.com/yfqi/lifelong_mlr_ifa.
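To make the two core ideas concrete, here is a minimal illustrative sketch of (a) a latent replay buffer that stores compact per-task latents instead of raw experience, and (b) an angular margin penalty that discourages task embeddings from drifting too close together. All class and function names, the eviction policy, and the exact hinge-on-cosine form of the loss are assumptions for illustration; the paper's actual formulation may differ (see the linked repository for the authors' implementation).

```python
# Illustrative sketch only -- names, eviction policy, and loss form are
# assumptions, not the paper's actual implementation.
import numpy as np

class LatentReplayBuffer:
    """Stores compact multimodal latents (e.g., fused visual / language /
    robot-state features) per task, with a fixed per-task capacity."""
    def __init__(self, capacity_per_task: int):
        self.capacity = capacity_per_task
        self.store = {}  # task_id -> list of latent vectors

    def add(self, task_id: int, latent: np.ndarray) -> None:
        buf = self.store.setdefault(task_id, [])
        if len(buf) >= self.capacity:
            # Evict a random old latent to stay within budget
            buf.pop(np.random.randint(len(buf)))
        buf.append(latent)

    def sample(self, task_id: int, n: int) -> np.ndarray:
        """Draw up to n stored latents for replay during later tasks."""
        buf = self.store[task_id]
        idx = np.random.randint(len(buf), size=min(n, len(buf)))
        return np.stack([buf[i] for i in idx])

def angular_margin_penalty(task_embeddings: np.ndarray,
                           margin_rad: float) -> float:
    """Hinge penalty on pairs of task embeddings whose angular separation
    falls below margin_rad, i.e., whose cosine similarity exceeds
    cos(margin_rad). Zero when all tasks are sufficiently distinct."""
    e = task_embeddings / np.linalg.norm(task_embeddings, axis=1,
                                         keepdims=True)
    cos_sim = e @ e.T
    cos_margin = np.cos(margin_rad)
    penalty = 0.0
    n = len(e)
    for i in range(n):
        for j in range(i + 1, n):
            penalty += max(0.0, cos_sim[i, j] - cos_margin)
    return penalty
```

In this reading, the penalty is zero whenever every pair of task embeddings is separated by at least the margin angle, so gradient pressure is applied only to embeddings that begin to collapse toward one another, which is one plausible way to preserve inter-task distinctiveness during incremental adjustment.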