WalkGPT, a novel pixel-grounded LVLM, is introduced to address the challenge of accessible pedestrian navigation by unifying language reasoning and segmentation for depth-aware accessibility guidance. The model incorporates a Multi-Scale Query Projector (MSQP), which aggregates image tokens alongside text tokens across spatial hierarchies, and a Calibrated Text Projector (CTP), guided by a Region Alignment Loss, which maps language embeddings into segmentation-aware representations. Evaluated on the newly introduced PAVE benchmark of 41k pedestrian-view images, WalkGPT demonstrates strong grounded reasoning and segmentation performance, generating conversational responses with segmentation masks that delineate accessible and harmful features, along with relative depth estimation.
LVLMs can now provide depth-aware pedestrian navigation guidance by grounding language reasoning and segmentation, without needing user-provided cues or anchor points.
Ensuring accessible pedestrian navigation requires reasoning about both semantic and spatial aspects of complex urban scenes, a challenge that existing Large Vision-Language Models (LVLMs) struggle to meet. Although these models can describe visual content, their lack of explicit grounding leads to object hallucinations and unreliable depth reasoning, limiting their usefulness for accessibility guidance. We introduce WalkGPT, a pixel-grounded LVLM for the new task of Grounded Navigation Guide, unifying language reasoning and segmentation within a single architecture for depth-aware accessibility guidance. Given a pedestrian-view image and a navigation query, WalkGPT generates a conversational response with segmentation masks that delineate accessible and harmful features, along with relative depth estimation. The model incorporates a Multi-Scale Query Projector (MSQP) that shapes the final image tokens by aggregating them alongside text tokens across spatial hierarchies, and a Calibrated Text Projector (CTP), guided by a proposed Region Alignment Loss, that maps language embeddings into segmentation-aware representations. These components enable fine-grained grounding and depth inference without user-provided cues or anchor points, allowing the model to generate complete and realistic navigation guidance. We also introduce PAVE, a large-scale benchmark of 41k pedestrian-view images paired with accessibility-aware questions and depth-grounded answers. Experiments show that WalkGPT achieves strong grounded reasoning and segmentation performance. The source code and dataset are available on the \href{https://sites.google.com/view/walkgpt-26/home}{project website}.
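To make the MSQP idea concrete, the sketch below shows one plausible reading of "aggregating image tokens alongside text tokens across spatial hierarchies": the image-token grid is pooled to several spatial scales, text tokens cross-attend to each pooled scale, and the per-scale summaries are fused. This is an illustrative assumption, not the paper's implementation; the class name, scale choices, and fusion layer are all hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScaleQueryProjectorSketch(nn.Module):
    """Hypothetical sketch of a multi-scale query projector: text tokens
    query pooled image tokens at several spatial scales, and the per-scale
    outputs are concatenated and fused. Not the paper's architecture."""

    def __init__(self, dim=64, scales=(1, 2, 4), heads=4):
        super().__init__()
        self.scales = scales
        # One cross-attention block per spatial scale.
        self.attn = nn.ModuleList(
            nn.MultiheadAttention(dim, heads, batch_first=True) for _ in scales
        )
        self.fuse = nn.Linear(dim * len(scales), dim)

    def forward(self, image_tokens, text_tokens):
        # image_tokens: (B, H*W, D) from a square patch grid
        # text_tokens:  (B, T, D)
        B, N, D = image_tokens.shape
        side = int(N ** 0.5)
        grid = image_tokens.transpose(1, 2).reshape(B, D, side, side)
        outs = []
        for attn, s in zip(self.attn, self.scales):
            # Pool the image grid to an s x s scale, then let text tokens
            # attend to it (text = queries, pooled image = keys/values).
            pooled = F.adaptive_avg_pool2d(grid, s).flatten(2).transpose(1, 2)
            out, _ = attn(text_tokens, pooled, pooled)  # (B, T, D)
            outs.append(out)
        # Fuse the per-scale summaries into one text-conditioned embedding.
        return self.fuse(torch.cat(outs, dim=-1))  # (B, T, D)
```

Under this reading, the coarse scales capture scene-level layout (e.g. where the sidewalk is) while finer scales localize small obstacles, which is consistent with the abstract's claim of fine-grained grounding without user-provided anchor points.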