The paper introduces Retrieval-Augmented Affordance Prediction (RAAP), a framework that combines affordance retrieval with alignment-based learning to improve robot manipulation in novel environments. RAAP decouples contact localization and action direction prediction, using dense correspondence to transfer contact points and a retrieval-augmented alignment model with dual-weighted attention to predict action directions. Experiments on DROID and HOI4D show RAAP achieves strong generalization with limited training data, enabling zero-shot robotic manipulation in simulation and real-world settings.
Robots can now generalize manipulation skills to unseen objects and categories from only tens of training examples per task, thanks to a novel retrieval-augmented affordance prediction framework.
Understanding object affordances is essential for enabling robots to perform purposeful, fine-grained interactions in diverse and unstructured environments. However, existing approaches either rely on retrieval, which is fragile owing to data sparsity and coverage gaps, or on large-scale models, which frequently mislocalize contact points and mispredict post-contact actions when applied to unseen categories, hindering robust generalization. We introduce Retrieval-Augmented Affordance Prediction (RAAP), a framework that unifies affordance retrieval with alignment-based learning. By decoupling static contact localization from dynamic action-direction prediction, RAAP transfers contact points via dense correspondence and predicts action directions through a retrieval-augmented alignment model that consolidates multiple references with dual-weighted attention. Trained on compact subsets of DROID and HOI4D with as few as tens of samples per task, RAAP achieves consistent performance across unseen objects and categories, and enables zero-shot robotic manipulation in both simulation and the real world. Project website: https://github.com/SEU-VIPGroup/RAAP.
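The dual-weighted consolidation of retrieved references described above can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, feature shapes, and the exact combination of attention scores with retrieval similarities are assumptions; the general idea is that each retrieved reference contributes its stored action direction, weighted both by feature-level attention and by retrieval similarity.

```python
import numpy as np

def dual_weighted_attention(query, ref_keys, ref_values, sim_weights):
    """Hypothetical sketch of consolidating k retrieved references
    into one action-direction prediction (not the paper's actual code).

    query:       (d,)   feature of the target observation
    ref_keys:    (k, d) features of the k retrieved references
    ref_values:  (k, 3) action directions stored with each reference
    sim_weights: (k,)   retrieval similarity scores (second weighting)
    """
    # First weighting: softmax attention from scaled query-key dot products.
    logits = ref_keys @ query / np.sqrt(query.shape[0])
    attn = np.exp(logits - logits.max())
    attn /= attn.sum()

    # Second weighting: modulate attention by retrieval similarity,
    # then renormalize ("dual-weighted" = attention x similarity).
    w = attn * sim_weights
    w /= w.sum()

    # Weighted sum of reference directions, normalized to a unit vector.
    direction = w @ ref_values
    return direction / np.linalg.norm(direction)
```

Under this reading, references that are both feature-similar and highly ranked by the retriever dominate the predicted direction, which is what lets a handful of references per task steer predictions on unseen objects.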