This paper introduces Scene Graph-Chain-of-Thought (SG-CoT), a framework that leverages scene graph representations to mitigate ambiguity in LLM-based robotic planners. SG-CoT allows LLMs to iteratively query the scene graph to detect and resolve ambiguities in instructions, improving grounding and reliability. Experiments show SG-CoT outperforms existing methods, achieving at least a 10% improvement in question accuracy and a 4-15% increase in success rates across single and multi-agent robotic tasks.
Scene graphs plus LLMs let robots ask clarifying questions, boosting multi-agent task success by 15%.
Ambiguity poses a major challenge to large language models (LLMs) used as robotic planners. In this letter, we present Scene Graph-Chain-of-Thought (SG-CoT), a two-stage framework in which LLMs iteratively query a scene graph representation of the environment to detect and clarify ambiguities. First, a structured scene graph of the environment is constructed from input observations, capturing objects, their attributes, and the relationships among them. Second, the LLM is equipped with retrieval functions to query the portions of the scene graph relevant to the provided instruction. This grounds the LLM's reasoning in the observation, increasing the reliability of robotic planners in ambiguous situations. SG-CoT also allows the LLM to identify the source of ambiguity and pose a targeted disambiguation question to the user or to another robot. Extensive experiments demonstrate that SG-CoT consistently outperforms prior methods, improving question accuracy by at least 10% and success rates by at least 4% in single-agent and 15% in multi-agent environments, validating its effectiveness for more generalizable robot planning.
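The retrieval-based querying described above can be illustrated with a minimal sketch. All class, method, and object names below are hypothetical assumptions for illustration; the paper does not publish this interface. The idea is that the scene graph exposes a small set of lookup functions the LLM can call mid-reasoning, and when a lookup returns multiple candidates the planner surfaces a disambiguation question instead of guessing.

```python
# Hypothetical sketch of a scene-graph retrieval interface (not the paper's API).
class SceneGraph:
    """Objects with attributes, plus directed relations between them."""

    def __init__(self):
        self.objects = {}    # object id -> {"category": str, "attributes": dict}
        self.relations = []  # (subject_id, predicate, object_id) triples

    def add_object(self, obj_id, category, **attributes):
        self.objects[obj_id] = {"category": category, "attributes": attributes}

    def add_relation(self, subj, predicate, obj):
        self.relations.append((subj, predicate, obj))

    # Retrieval functions the LLM could invoke while reasoning.
    def find_objects(self, category):
        """Return ids of all objects in a given category."""
        return [i for i, o in self.objects.items() if o["category"] == category]

    def get_attributes(self, obj_id):
        return self.objects[obj_id]["attributes"]

    def get_relations(self, obj_id):
        """Return all relation triples that mention this object."""
        return [r for r in self.relations if obj_id in (r[0], r[2])]


# Example: "pick up the cup" is ambiguous when two cups are present,
# so the planner detects the ambiguity and asks a clarifying question.
g = SceneGraph()
g.add_object("cup_1", "cup", color="red")
g.add_object("cup_2", "cup", color="blue")
g.add_object("table_1", "table")
g.add_relation("cup_1", "on", "table_1")

matches = g.find_objects("cup")
question = None
if len(matches) > 1:
    colors = [g.get_attributes(m).get("color") for m in matches]
    question = f"I see {len(matches)} cups ({', '.join(colors)}); which one?"
```

In a full system, the retrieval results would be fed back into the LLM's chain of thought as context, and the generated question would go to the user or a teammate robot rather than being a plain string.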