The paper introduces Referring-Aware Visuomotor Policy (ReV), a closed-loop visuomotor policy learning framework that enhances robustness to out-of-distribution errors in robotic manipulation by incorporating sparse referring points. ReV uses coupled diffusion heads to generate globally consistent action anchors and adaptively interpolate trajectories based on the referring point's temporal position. Trained with targeted perturbations on expert demonstrations, ReV achieves higher success rates in simulated and real-world tasks without additional data or fine-tuning.
Robots can now adapt to unforeseen errors and dynamically replan trajectories in real time simply by incorporating sparse, human- or planner-provided "referring points" into their visuomotor policies.
This paper addresses a fundamental problem in visuomotor policy learning for robotic manipulation: how to enhance robustness to out-of-distribution execution errors and dynamically re-route trajectories when the model is trained solely on the original expert demonstrations. We introduce the Referring-Aware Visuomotor Policy (ReV), a closed-loop framework that adapts to unforeseen circumstances by instantly incorporating sparse referring points provided by a human or a high-level reasoning planner. Specifically, ReV leverages coupled diffusion heads to preserve standard task-execution patterns while seamlessly integrating sparse referring points via a trajectory-steering strategy. Upon receiving a referring point, the global diffusion head first generates a sequence of globally consistent yet temporally sparse action anchors and identifies the precise temporal position of the referring point within this sequence. The local diffusion head then adaptively interpolates between adjacent anchors based on the current temporal position for the task at hand. This closed-loop process repeats at every execution step, enabling real-time trajectory replanning in response to dynamic changes in the scene. In practice, rather than relying on elaborate annotations, ReV is trained only by applying targeted perturbations to expert demonstrations. Without any additional data or fine-tuning scheme, ReV achieves higher success rates across challenging simulated and real-world tasks.
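To make the coupled-head control flow concrete, the sketch below walks through one closed-loop ReV-style step: the global head proposes temporally sparse anchors and a slot for the referring point, the local head densifies the segment around the current time step, and only the first dense action is executed before replanning. The paper does not publish code, so every class, function, and signature here (`GlobalHead`, `LocalHead`, `rev_step`) is a hypothetical stand-in, with toy samplers in place of the actual diffusion models.

```python
import numpy as np

class GlobalHead:
    """Stand-in for the global diffusion head: emits K temporally sparse
    action anchors plus the temporal index for an optional referring point.
    (Toy sampler; the real head is a conditional diffusion model.)"""
    def __init__(self, num_anchors=8, action_dim=7):
        self.num_anchors = num_anchors
        self.action_dim = action_dim

    def sample(self, obs, referring_point=None):
        anchors = np.random.randn(self.num_anchors, self.action_dim) * 0.01
        # Hypothetical placement: the real model predicts where the referring
        # point belongs in the anchor sequence; here we just pick the middle.
        ref_index = self.num_anchors // 2 if referring_point is not None else None
        return anchors, ref_index


class LocalHead:
    """Stand-in for the local diffusion head: densifies the segment between
    the two anchors adjacent to the current temporal position."""
    def sample(self, obs, anchor_a, anchor_b, steps=4):
        # Linear interpolation stands in for conditional diffusion sampling.
        ts = np.linspace(0.0, 1.0, steps + 1)[1:, None]
        return (1 - ts) * anchor_a + ts * anchor_b


def rev_step(global_head, local_head, obs, t, referring_point=None):
    # 1) Global head: globally consistent, temporally sparse anchors, and
    #    the temporal slot where the referring point belongs.
    anchors, ref_index = global_head.sample(obs, referring_point)
    if ref_index is not None:
        anchors[ref_index] = referring_point  # steer the trajectory through it
    # 2) Local head: adaptively interpolate between the pair of anchors
    #    adjacent to the current temporal position t.
    i = min(t % (len(anchors) - 1), len(anchors) - 2)
    dense = local_head.sample(obs, anchors[i], anchors[i + 1])
    # 3) Closed loop: execute only the first dense action, then replan.
    return dense[0]


if __name__ == "__main__":
    g, l = GlobalHead(), LocalHead()
    obs = np.zeros(16)            # placeholder observation
    ref = np.full(7, 0.5)         # a sparse referring point from a human/planner
    for t in range(3):            # replan at every execution step
        action = rev_step(g, l, obs, t, referring_point=ref)
        print(t, action.round(3))
```

Because replanning happens at every step, a newly arriving referring point immediately reshapes the anchor sequence, which is what allows mid-execution re-routing without any retraining or extra data.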