The paper introduces STAR, a two-stage framework that mitigates cascading errors in LLM spatial reasoning by grounding models in topological anchors, along with RedMaze-23K, a new dataset with turnpoint annotations. The first stage employs supervised fine-tuning to instill spatial semantics and prune redundant paths, while the second applies Spatial-aware Segment-level Direct Preference Optimization (SDPO) to improve self-correction in long-horizon navigation. STAR achieves state-of-the-art performance among open-source models, outperforming DeepSeek-V3 and reaching 82.4% of GPT-4's performance.
Open-source models can now approach GPT-4 in spatial reasoning, thanks to a novel two-stage training framework that grounds LLMs in topological anchors.
Structured spatial navigation is a core benchmark for the spatial reasoning of Large Language Models (LLMs). Existing paradigms such as Visualization-of-Thought (VoT) are prone to cascading errors in complex topologies. To address this, we propose STAR, a two-stage framework grounded in topological anchors, and introduce the RedMaze-23K dataset with human-inspired turnpoint annotations. The first stage uses supervised fine-tuning to help models internalize spatial semantics and prune redundant paths. The second adopts Spatial-aware Segment-level Direct Preference Optimization (SDPO) to refine self-correction in long-horizon navigation. Experiments show that STAR achieves state-of-the-art performance among open-source models: its 32B variant outperforms DeepSeek-V3 (29.27% vs. 25.00%) and reaches 82.4% of GPT-4's performance.
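The abstract does not spell out the SDPO objective, but it is described as a segment-level variant of Direct Preference Optimization. Below is a minimal sketch of what such an objective could look like, assuming preference pairs are formed over path segments delimited by consecutive turnpoints; the symbols s_w, s_l, beta, and pi_ref are illustrative assumptions, not the paper's notation.

```latex
% Sketch: standard DPO loss restated at the segment level (assumed form;
% the paper's exact SDPO formulation is not given in this abstract).
% For a navigation prefix x ending at a turnpoint, s_w / s_l denote the
% preferred / dispreferred segment continuations, pi_ref is the frozen
% SFT reference policy, and beta scales the implicit reward margin.
\mathcal{L}_{\mathrm{SDPO}}(\theta) =
  -\,\mathbb{E}_{(x,\, s_w,\, s_l)}
  \left[
    \log \sigma\!\left(
      \beta \log \frac{\pi_\theta(s_w \mid x)}{\pi_{\mathrm{ref}}(s_w \mid x)}
      \;-\;
      \beta \log \frac{\pi_\theta(s_l \mid x)}{\pi_{\mathrm{ref}}(s_l \mid x)}
    \right)
  \right]
```

If this reading is right, scoring preferences per segment rather than per full trajectory would localize the learning signal to the turnpoint where a navigation error first occurs, which is consistent with the paper's stated goal of curbing cascading errors in long-horizon navigation.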