The paper introduces $M^2$-VLA, a novel approach to robotic manipulation that leverages pre-trained Vision-Language Models (VLMs) as backbones without end-to-end fine-tuning. It addresses the challenge of bridging the gap between VLMs' high-level understanding and robotic control via a Mixture of Layers (MoL) strategy for extracting task-critical information. The method also incorporates a Meta Skill Module (MSM) to facilitate efficient trajectory learning, achieving strong performance in both simulated and real-world robotic manipulation tasks while preserving VLM generalization.
Forget end-to-end fine-tuning: $M^2$-VLA unlocks the power of generalized VLMs for robotic manipulation by intelligently mixing layers and incorporating meta-skills.
Current Vision-Language-Action (VLA) models predominantly rely on end-to-end fine-tuning. While effective, this paradigm compromises the inherent generalization capabilities of Vision-Language Models (VLMs) and causes catastrophic forgetting. To address these limitations, we propose $M^2$-VLA, which demonstrates that a generalized VLM can serve directly as a powerful backbone for robotic manipulation. A key challenge, however, is bridging the gap between the high-level semantic understanding of VLMs and the precise requirements of robotic control. To overcome this, we introduce a Mixture of Layers (MoL) strategy that selectively extracts task-critical information from dense semantic features. To further facilitate efficient trajectory learning under constrained model capacity, we propose a Meta Skill Module (MSM) that integrates strong inductive biases. Extensive experiments in both simulated and real-world environments demonstrate the effectiveness of our approach, and generalization and ablation studies validate the architecture's zero-shot capabilities and confirm the contribution of each key component. Our code and pre-trained models will be made publicly available.
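To make the Mixture of Layers idea more concrete, the sketch below mixes hidden states from several frozen VLM layers using learned, normalized weights before projecting them for a downstream action head. This is a minimal illustration under our own assumptions; the module name `MixtureOfLayers`, the choice of layers, the dimensions, and the projection are hypothetical and are not taken from the paper's implementation.

```python
# Minimal sketch of a Mixture-of-Layers feature extractor (illustrative only;
# layer choices, dimensions, and names are assumptions, not the paper's code).
import torch
import torch.nn as nn


class MixtureOfLayers(nn.Module):
    """Combine hidden states from several frozen VLM layers with learned weights."""

    def __init__(self, num_layers: int, hidden_dim: int, out_dim: int):
        super().__init__()
        # One learnable mixing logit per selected VLM layer.
        self.mix_logits = nn.Parameter(torch.zeros(num_layers))
        # Small projection mapping the mixed features to the action-head input size.
        self.proj = nn.Linear(hidden_dim, out_dim)

    def forward(self, layer_states: list[torch.Tensor]) -> torch.Tensor:
        # layer_states: list of [batch, seq, hidden_dim] tensors, one per selected
        # layer of a frozen VLM backbone (no gradients flow into the VLM itself).
        stacked = torch.stack(layer_states, dim=0)             # [L, B, S, H]
        weights = torch.softmax(self.mix_logits, dim=0)        # normalized layer weights
        mixed = (weights.view(-1, 1, 1, 1) * stacked).sum(0)   # [B, S, H]
        return self.proj(mixed)                                # task-specific features


# Usage: mix three hypothetical intermediate layers of a frozen VLM.
if __name__ == "__main__":
    mol = MixtureOfLayers(num_layers=3, hidden_dim=1024, out_dim=256)
    fake_states = [torch.randn(2, 16, 1024) for _ in range(3)]
    features = mol(fake_states)
    print(features.shape)  # torch.Size([2, 16, 256])
```

Because only the mixing weights and projection are trained while the VLM stays frozen, a scheme of this kind can expose lower-level spatial detail from intermediate layers to the policy without fine-tuning the backbone, which is the general motivation the abstract attributes to MoL.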