The paper introduces VAMOS, a hierarchical Vision-Language-Action (VLA) model for robot navigation that separates semantic planning from embodiment grounding. This is achieved by training a generalist planner on diverse data and a specialist affordance model in simulation to learn robot-specific physical constraints. The key result is that VAMOS achieves higher success rates in real-world indoor and outdoor navigation compared to state-of-the-art methods, demonstrating effective cross-embodiment navigation and improved reliability through rejection of infeasible plans.
Robots can now navigate more reliably and across different bodies (wheeled vs. legged) thanks to a hierarchical model that separates high-level planning from low-level physical constraints.
A fundamental challenge in robot navigation lies in learning policies that generalize across diverse environments while conforming to the unique physical constraints and capabilities of a specific embodiment (e.g., quadrupeds can walk up stairs, but rovers cannot). We propose VAMOS, a hierarchical VLA that decouples semantic planning from embodiment grounding: a generalist planner learns from diverse, open-world data, while a specialist affordance model learns the robot's physical constraints and capabilities in safe, low-cost simulation. We enable this separation with a carefully designed interface: the high-level planner proposes candidate paths directly in image space, which the affordance model then evaluates and re-ranks. Our real-world experiments show that VAMOS achieves higher success rates in both indoor and complex outdoor navigation than state-of-the-art model-based and end-to-end learning methods. We also show that our hierarchical design enables cross-embodiment navigation across legged and wheeled robots and is easily steerable using natural language. Real-world ablations confirm that the specialist model is key to embodiment grounding, enabling a single high-level planner to be deployed across physically distinct wheeled and legged robots. Finally, this model significantly enhances single-robot reliability, achieving 3X higher success rates by rejecting physically infeasible plans. Website: https://vamos-vla.github.io/
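The propose-then-re-rank interface described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names, the pixel-waypoint path representation, the toy "stairs region" affordance, and the feasibility threshold are all assumptions made here for clarity.

```python
import numpy as np

def rerank_paths(candidate_paths, affordance_score, feasibility_threshold=0.5):
    """Re-rank image-space candidate paths by an embodiment-specific
    affordance model, rejecting physically infeasible ones.

    candidate_paths: list of (N, 2) arrays of pixel waypoints, as proposed
        by a high-level planner (hypothetical interface).
    affordance_score: callable mapping a path to a traversability score
        in [0, 1] (stands in for the learned specialist model).
    Returns feasible paths sorted best-first.
    """
    scored = [(affordance_score(p), p) for p in candidate_paths]
    feasible = [(s, p) for s, p in scored if s >= feasibility_threshold]
    feasible.sort(key=lambda sp: sp[0], reverse=True)
    return [p for _, p in feasible]

# Toy affordance for a wheeled robot: penalize waypoints inside a
# hypothetical "stairs" image region the rover cannot traverse.
def wheeled_affordance(path):
    in_stairs = sum(100 <= x <= 150 for x, _ in path)
    return 1.0 - in_stairs / len(path)

flat_path = np.array([[10, 200], [40, 180], [80, 160]])    # avoids stairs
stair_path = np.array([[110, 200], [120, 150], [130, 100]])  # crosses stairs

ranked = rerank_paths([flat_path, stair_path], wheeled_affordance)
# stair_path scores 0.0 and is rejected; only flat_path survives
```

Swapping in a legged robot's affordance model would let the same planner output pass through stair-crossing paths, which is the cross-embodiment point of the hierarchy.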