MMaDA-VLA is introduced as a native discrete diffusion Vision-Language-Action model that unifies multi-modal understanding and action generation. It embeds language, images, and robot controls into a discrete token space and uses masked token denoising to generate future goal observations and action chunks in parallel. The iterative denoising process improves long-horizon consistency by grounding actions in predicted future visual outcomes, achieving state-of-the-art performance on LIBERO (98.0% success) and CALVIN (4.78 average length).
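The summary above hinges on mapping continuous robot controls into the same discrete token space as text and image tokens. Below is a minimal sketch of one common way to do this, per-dimension uniform binning; the bin count, control range, and function names are illustrative assumptions, not the paper's actual action tokenizer.

```python
import numpy as np

# Hypothetical action tokenizer: per-dimension uniform binning of continuous
# controls into a shared discrete vocabulary. Bin count and range are
# illustrative assumptions, not the released MMaDA-VLA tokenizer.
NUM_BINS = 256                        # assumed number of ids reserved for action tokens
ACTION_LOW, ACTION_HIGH = -1.0, 1.0   # assumed normalized control range

def actions_to_tokens(actions: np.ndarray) -> np.ndarray:
    """Map continuous controls in [ACTION_LOW, ACTION_HIGH] to integer token ids."""
    clipped = np.clip(actions, ACTION_LOW, ACTION_HIGH)
    unit = (clipped - ACTION_LOW) / (ACTION_HIGH - ACTION_LOW)   # -> [0, 1]
    return np.minimum((unit * NUM_BINS).astype(np.int64), NUM_BINS - 1)

def tokens_to_actions(tokens: np.ndarray) -> np.ndarray:
    """Invert the binning by taking each bin's center value."""
    unit = (tokens.astype(np.float64) + 0.5) / NUM_BINS
    return unit * (ACTION_HIGH - ACTION_LOW) + ACTION_LOW

# Example: a chunk of 8 timesteps x 7 control dimensions round-trips with
# quantization error of at most half a bin width.
chunk = np.random.uniform(-1.0, 1.0, size=(8, 7))
error = np.abs(tokens_to_actions(actions_to_tokens(chunk)) - chunk).max()
assert error <= (ACTION_HIGH - ACTION_LOW) / (2 * NUM_BINS) + 1e-9
```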
Ditch the clunky architectures: a single diffusion model can now handle vision, language, and robot control to achieve SOTA manipulation performance.
Vision-Language-Action (VLA) models aim to control robots for manipulation from visual observations and natural-language instructions. However, existing hierarchical and autoregressive paradigms often introduce architectural overhead, suffer from temporal inconsistency and long-horizon error accumulation, and lack a mechanism to capture environment dynamics without extra modules. To address these limitations, we present MMaDA-VLA, a fully native, pre-trained large diffusion VLA model that unifies multi-modal understanding and generation in a single framework. Our key idea is a native discrete diffusion formulation that embeds language, images, and continuous robot controls into one discrete token space and trains a single backbone with masked token denoising to jointly generate a future goal observation and an action chunk in parallel. Iterative denoising enables global, order-free refinement, improving long-horizon consistency while grounding actions in predicted future visual outcomes without auxiliary world models. Experiments across simulation benchmarks and real-world tasks show state-of-the-art performance, achieving a 98.0% average success rate on LIBERO and a 4.78 average length on CALVIN.
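To make the masked-token denoising described in the abstract concrete, here is a minimal sketch of parallel iterative decoding in the style of MaskGIT-like discrete diffusion: all goal-observation and action-chunk positions start as [MASK], and each step re-predicts every masked position at once, committing the most confident predictions under a cosine unmasking schedule. The model interface, schedule, mask id, and token layout are assumptions for illustration, not the released MMaDA-VLA implementation.

```python
import math
import torch

MASK_ID = 0  # assumed id of the [MASK] token in the shared discrete vocabulary

@torch.no_grad()
def denoise_parallel(model, prompt_tokens, num_target, num_steps=8):
    """Illustrative masked-token denoising loop with a cosine unmasking schedule.

    Assumptions (not the paper's exact interface): `model(tokens)` returns
    per-position logits over the shared vocabulary, `prompt_tokens` holds the
    language + current-observation tokens, and the last `num_target` positions
    are the goal-image and action-chunk tokens to be generated in parallel.
    """
    device = prompt_tokens.device
    target = torch.full((num_target,), MASK_ID, dtype=torch.long, device=device)

    for step in range(1, num_steps + 1):
        tokens = torch.cat([prompt_tokens, target])             # condition on the prompt
        logits = model(tokens.unsqueeze(0))[0, -num_target:]     # logits for target slots
        conf, pred = logits.softmax(dim=-1).max(dim=-1)          # per-position confidence

        still_masked = target == MASK_ID
        # Cosine schedule: by step t, a fraction 1 - cos(t/T * pi/2) of slots is unmasked.
        total_unmasked = math.ceil(num_target * (1 - math.cos(step / num_steps * math.pi / 2)))
        num_new = max(total_unmasked - int((~still_masked).sum()), 1)
        num_new = min(num_new, int(still_masked.sum()))

        conf = conf.masked_fill(~still_masked, float("-inf"))    # only rank masked slots
        newly_filled = conf.topk(num_new).indices
        target[newly_filled] = pred[newly_filled]                 # commit most confident tokens

    # Safeguard: fill any slot still masked with its final-step prediction.
    target[target == MASK_ID] = pred[target == MASK_ID]
    return target  # goal-observation tokens followed by action-chunk tokens
```

In practice the two target segments would then be decoded back by the image de-tokenizer and the action de-tokenizer (e.g., a binning scheme like the sketch above) into the predicted goal image and the executable action chunk.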