The paper introduces S-VAM, a novel video-action model for robot learning that achieves real-time inference and high-fidelity visual foresight by predicting coherent geometric and semantic representations in a single forward pass. This is enabled by a self-distillation strategy where lightweight decouplers learn to map noisy one-step features to vision foundation model (VFM) representations extracted from a diffusion model's multi-step generated videos. Experiments show S-VAM outperforms state-of-the-art methods in simulation and real-world manipulation tasks.
Ditch slow, multi-step video generation: S-VAM distills the structured generative priors of multi-step denoising into a single forward pass for real-time robot action prediction.
Video action models (VAMs) have emerged as a promising paradigm for robot learning, owing to their powerful visual foresight for complex manipulation tasks. However, current VAMs typically rely on either slow multi-step video generation or noisy one-step feature extraction, and so cannot simultaneously guarantee real-time inference and high-fidelity foresight. To address this limitation, we propose S-VAM, a shortcut video-action model that foresees coherent geometric and semantic representations in a single forward pass. Serving as a stable blueprint, these foreseen representations significantly simplify action prediction. To enable this efficient shortcut, we introduce a novel self-distillation strategy that condenses the structured generative priors of multi-step denoising into one-step inference. Specifically, vision foundation model (VFM) representations extracted from the diffusion model's own multi-step generated videos provide teacher targets, and lightweight decouplers, as students, learn to directly map noisy one-step features to these targets. Extensive experiments in simulation and the real world demonstrate that S-VAM outperforms state-of-the-art methods, enabling efficient and precise manipulation in complex environments. Our project page is https://haodong-yan.github.io/S-VAM/
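The self-distillation idea in the abstract can be sketched abstractly: a lightweight student ("decoupler") regresses noisy one-step features onto VFM targets extracted from the multi-step generated video. The sketch below is a minimal illustration, not the paper's implementation: the linear decoupler, the feature dimensions, and the MSE objective are all assumptions made for clarity.

```python
# Hypothetical sketch of the self-distillation step (assumed names and dims;
# the paper's actual decoupler architecture and loss may differ).
import numpy as np

rng = np.random.default_rng(0)

D_IN, D_OUT = 64, 32  # assumed: one-step feature dim -> VFM feature dim

# Student: a lightweight linear "decoupler" mapping noisy one-step features
# into the teacher's VFM representation space.
W = rng.normal(scale=0.1, size=(D_IN, D_OUT))

# Stand-ins for the two feature sources:
#   h_onestep : noisy features from a single denoising forward pass (student input)
#   z_teacher : VFM features of the multi-step generated video (teacher target)
h_onestep = rng.normal(size=(8, D_IN))
z_teacher = rng.normal(size=(8, D_OUT))

def distill_step(W, h, z, lr=1e-2):
    """One gradient step on the MSE distillation loss mean((hW - z)^2)."""
    err = h @ W - z
    loss = np.mean(err ** 2)
    grad_W = 2.0 * h.T @ err / err.size  # gradient of the mean squared error
    return W - lr * grad_W, loss

_, loss0 = distill_step(W, h_onestep, z_teacher)
for _ in range(200):
    W, loss = distill_step(W, h_onestep, z_teacher)

print(f"distillation loss: {loss0:.4f} -> {loss:.4f}")
```

After training, the decoupler can produce VFM-like representations from a single forward pass, which is the shortcut that avoids multi-step generation at inference time.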