FutureVLA introduces a joint visuomotor predictive architecture for vision-language-action (VLA) models. During pretraining, a Joint Visuomotor Gating mechanism decouples visual and motor information so the model can learn generalized physical priors: the motor stream focuses on continuous physical dynamics while querying visual tokens for environmental constraints. A latent embedding alignment strategy then lets diverse downstream VLA models internalize these temporal priors without architectural changes, yielding consistent performance improvements.
By decoupling visual and motor information during pretraining, FutureVLA unlocks more effective visuomotor prediction for vision-language-action models, boosting performance without modifying downstream architectures.
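The gating mechanism described above can be pictured as a two-stream block in which motor tokens attend to each other over time and query visual tokens through a learned gate, while the visual tokens themselves are left untouched. The PyTorch sketch below is only an illustrative reading of that idea; the module name, dimensions, and gate formulation are assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a gated visuomotor block (not the authors' code).
import torch
import torch.nn as nn

class JointVisuomotorGatingBlock(nn.Module):
    """Motor tokens evolve via temporal self-attention and query visual tokens
    through a learned gate, keeping the two streams structurally decoupled."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        # Temporal self-attention over the motor stream (continuous dynamics).
        self.motor_self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Cross-attention: motor queries, visual keys/values (environmental constraints).
        self.vis_cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Gate deciding how much visual context flows into the motor stream.
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, motor_tokens: torch.Tensor, visual_tokens: torch.Tensor):
        # 1) Model continuous physical dynamics within the motor stream.
        m = self.norm1(motor_tokens)
        m = motor_tokens + self.motor_self_attn(m, m, m, need_weights=False)[0]
        # 2) Explicitly query visual tokens for constraints; the gate modulates
        #    how much static scene context is mixed into the motor stream.
        q = self.norm2(m)
        vis_ctx = self.vis_cross_attn(q, visual_tokens, visual_tokens, need_weights=False)[0]
        m = m + self.gate(q) * vis_ctx
        # 3) Visual tokens are returned unchanged: visual state is preserved,
        #    only the motor stream is updated.
        return m + self.ffn(m), visual_tokens
```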
Predictive foresight is important for intelligent embodied agents. Since a robot's motor execution is intrinsically constrained by its visual perception of environmental geometry, effectively anticipating the future requires capturing this tightly coupled visuomotor interplay. While recent vision-language-action (VLA) models attempt to incorporate future guidance, they struggle with this joint modeling. Existing explicit methods divert capacity to task-irrelevant visual details, whereas implicit methods relying on sparse frame pairs disrupt temporal continuity. Because both rely heavily on visual reconstruction, these methods become visually dominated, entangling static scene context with dynamic action intent. We argue that effective joint visuomotor predictive modeling requires both temporal continuity and visually conditioned supervision decoupling. To this end, we propose FutureVLA, featuring a novel Joint Visuomotor Predictive Architecture. FutureVLA is designed to extract joint visuomotor embeddings by first decoupling visual and motor information and then jointly encoding generalized physical priors. Specifically, in the pretraining stage, we leverage heterogeneous manipulation datasets and introduce a Joint Visuomotor Gating mechanism that structurally separates visual state preservation from temporal action modeling. This allows the motor stream to focus on continuous physical dynamics while explicitly querying visual tokens for environmental constraints, yielding highly generalizable joint visuomotor embeddings. Subsequently, in the post-training stage, we employ a latent embedding alignment strategy, enabling diverse downstream VLA models to internalize these temporal priors without modifying their inference architectures. Extensive experiments demonstrate that FutureVLA consistently improves VLA frameworks.
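For the post-training stage, the latent embedding alignment can be read as an auxiliary objective that pulls a downstream VLA's hidden states toward the pretrained visuomotor embeddings and is dropped at inference, leaving the VLA's architecture untouched. The sketch below shows one plausible form of such an objective; the head design, cosine loss, and loss weight are assumptions for illustration, not details from the paper.

```python
# Hypothetical sketch of a post-training alignment objective (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentAlignmentHead(nn.Module):
    """Projects a downstream VLA's hidden states into the space of the pretrained
    joint visuomotor embeddings so they can be matched with a simple loss.
    The head is an auxiliary training branch only: it is discarded at inference,
    so the VLA's deployed architecture is unchanged."""

    def __init__(self, vla_dim: int, prior_dim: int):
        super().__init__()
        self.proj = nn.Linear(vla_dim, prior_dim)

    def forward(self, vla_hidden: torch.Tensor, prior_embed: torch.Tensor) -> torch.Tensor:
        # Cosine-style alignment between projected VLA states and the frozen
        # visuomotor embeddings produced by the pretrained predictive model.
        pred = F.normalize(self.proj(vla_hidden), dim=-1)
        target = F.normalize(prior_embed.detach(), dim=-1)  # priors are not updated
        return 1.0 - (pred * target).sum(dim=-1).mean()

# Usage sketch (dimensions and weight are placeholders):
# align_head = LatentAlignmentHead(vla_dim=1024, prior_dim=512)
# loss = action_loss + 0.1 * align_head(vla_hidden_states, visuomotor_embeddings)
```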