The paper introduces AIDE, a dual-stream framework that lets robots execute ambiguous instructions in unfamiliar environments by interactively identifying task-relevant objects. AIDE pairs Multi-Stage Inference (MSI) for decision-making with Accelerated Decision-Making (ADM) for execution, enabling zero-shot affordance analysis and instruction interpretation. Experiments show AIDE achieves over 80% task planning success and more than 95% accuracy in closed-loop execution at 10 Hz, surpassing existing VLM-based methods.
Robots can now understand and act on ambiguous instructions like "I'm thirsty" in real time, thanks to a new framework that combines visual reasoning with interactive exploration.
Enabling robots to explore and act in unfamiliar environments under ambiguous human instructions by interactively identifying task-relevant objects (e.g., identifying cups or beverages for "I'm thirsty") remains challenging for existing vision-language model (VLM)-based methods. This challenge stems from inefficient reasoning and the lack of environmental interaction, which hinder real-time task planning and execution. To address this, we propose Affordance-Aware Interactive Decision-Making and Execution for Ambiguous Instructions (AIDE), a dual-stream framework that integrates interactive exploration with vision-language reasoning: Multi-Stage Inference (MSI) serves as the decision-making stream and Accelerated Decision-Making (ADM) as the execution stream, enabling zero-shot affordance analysis and interpretation of ambiguous instructions. Extensive experiments in simulation and real-world environments show that AIDE achieves a task planning success rate of over 80% and more than 95% accuracy in closed-loop continuous execution at 10 Hz, outperforming existing VLM-based methods in diverse open-world scenarios.
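To make the dual-stream idea concrete, here is a minimal Python sketch of how a slow reasoning stream and a fast 10 Hz execution stream can be decoupled through a shared, lock-protected plan. This is an illustrative assumption about the architecture, not the paper's implementation; the names `get_observation`, `vlm_reason`, and `execute_action` are hypothetical stubs.

```python
import threading
import time

# --- Hypothetical stubs (not from the paper) --------------------------------

def get_observation():
    """Stand-in for camera frames plus the human instruction."""
    return {"image": None, "instruction": "I'm thirsty"}

def vlm_reason(obs):
    """Stand-in for multi-stage VLM inference; slow in practice."""
    time.sleep(0.5)  # emulate VLM latency
    return {"target": "cup", "grasp": "handle"}

def execute_action(plan):
    """Stand-in for one low-level control step toward the planned target."""
    pass

# --- Dual-stream loop --------------------------------------------------------

latest_plan = None
plan_lock = threading.Lock()

def decision_stream():
    """Slow stream: refresh the shared plan from fresh observations."""
    global latest_plan
    while True:
        plan = vlm_reason(get_observation())
        with plan_lock:
            latest_plan = plan

def execution_stream(hz=10.0):
    """Fast stream: act at a fixed rate on the most recent cached plan."""
    period = 1.0 / hz
    while True:
        t0 = time.monotonic()
        with plan_lock:
            plan = latest_plan
        if plan is not None:
            execute_action(plan)
        time.sleep(max(0.0, period - (time.monotonic() - t0)))

if __name__ == "__main__":
    threading.Thread(target=decision_stream, daemon=True).start()
    execution_stream(hz=10.0)
```

The key design point this sketch illustrates is that the controller never blocks on VLM inference: the execution loop always acts on the most recent completed decision, which is what allows closed-loop control at a rate far higher than the reasoning stream can sustain.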