The paper introduces CEBench, a new benchmark for Vision-Language-Action (VLA) models that includes diverse embodiments in simulation and the real world, along with 14.4k simulated and 1.6k real-world expert trajectories. Using CEBench, the authors identify key factors impacting VLA practicality and propose LLaVA-VLA, a lightweight VLA model designed for consumer-grade GPUs. LLaVA-VLA, which integrates a compact VLM backbone with multi-view perception, proprioceptive tokenization, and action chunking, achieves strong generalization and versatility across embodiments, demonstrating end-to-end mobile manipulation capabilities in real-world experiments.
A practical VLA model, LLaVA-VLA, achieves strong generalization and versatility on a new benchmark, CEBench, while running on consumer-grade GPUs, eliminating the need for costly pre-training.
Vision-Language-Action (VLA) models have emerged as generalist robotic agents. However, existing VLAs are hindered by excessive parameter scales, prohibitive pre-training requirements, and limited applicability to diverse embodiments. To improve the practicality of VLAs, we propose a comprehensive benchmark and an improved baseline. First, we propose CEBench, a new benchmark spanning diverse embodiments in both simulation and the real world, with consideration of domain randomization. We collect 14.4k simulated trajectories and 1.6k real-world expert-curated trajectories to support training on CEBench. Second, using CEBench as our testbed, we study three critical aspects of VLAs' practicality and offer several key findings. Informed by these findings, we introduce LLaVA-VLA, a lightweight yet powerful VLA designed for practical deployment on consumer-grade GPUs. Architecturally, it integrates a compact VLM backbone with multi-view perception, proprioceptive tokenization, and action chunking. To eliminate reliance on costly pre-training, LLaVA-VLA adopts a two-stage training paradigm comprising post-training and fine-tuning. Furthermore, LLaVA-VLA extends the action space to unify navigation and manipulation. Experiments across embodiments demonstrate LLaVA-VLA's generalization and versatility, while real-world mobile manipulation experiments establish it as the first end-to-end VLA model for mobile manipulation. We will open-source all datasets, code, and checkpoints upon acceptance to foster reproducibility and future research.
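To make the architectural ingredients concrete, here is a minimal, hedged sketch of two of them: proprioceptive tokenization (discretizing continuous robot state into token ids a VLM can consume) and action chunking (emitting K actions per forward pass so the robot executes a short open-loop chunk between slow VLM inferences). The bin count, chunk size, and all function and class names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def tokenize_proprio(state, low, high, n_bins=256):
    """Map a continuous proprioceptive state to discrete token ids.

    Assumption: simple per-dimension uniform binning; the paper's exact
    tokenization scheme may differ.
    """
    clipped = np.clip(state, low, high)
    norm = (clipped - low) / (high - low)            # normalize to [0, 1]
    return np.minimum((norm * n_bins).astype(int), n_bins - 1)

def detokenize(token_ids, low, high, n_bins=256):
    """Inverse map: token ids back to continuous values at bin centers."""
    return low + (token_ids + 0.5) / n_bins * (high - low)

class ChunkedPolicy:
    """Action chunking: query the (expensive) model once per K steps.

    The policy keeps a queue of predicted actions and only re-invokes the
    model when the queue is empty.
    """
    def __init__(self, predict_chunk, chunk_size=8):
        self.predict_chunk = predict_chunk   # callable(obs, k) -> list of k actions
        self.chunk_size = chunk_size
        self._queue = []

    def act(self, obs):
        if not self._queue:
            self._queue = list(self.predict_chunk(obs, self.chunk_size))
        return self._queue.pop(0)

# Illustrative usage with a dummy model that repeats the observation:
low, high = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
tokens = tokenize_proprio(np.array([0.0, 1.0]), low, high)
policy = ChunkedPolicy(lambda obs, k: [obs] * k, chunk_size=3)
```

A unified action space for mobile manipulation, as the abstract describes, would additionally concatenate base-motion commands with arm and gripper actions into one such tokenized vector; the exact layout is not specified here.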