The paper introduces CF-VLA, a coarse-to-fine approach for vision-language-action (VLA) policies that improves the efficiency-quality trade-off in action generation. CF-VLA uses a two-stage process: first, a coarse stage predicts endpoint velocity to create a structured initialization, and then a fine stage refines this initialization in a single step. Experiments on CALVIN and LIBERO demonstrate that CF-VLA achieves superior performance and efficiency compared to existing methods, including a 75.4% reduction in action sampling latency and a new state-of-the-art 83.0% real-robot success rate.
Forget slow, multi-step action generation: CF-VLA's coarse-to-fine approach slashes latency by 75% while boosting real-robot success rates to a new high of 83%.
Flow-based vision-language-action (VLA) policies offer strong expressivity for action generation, but suffer from a fundamental inefficiency: multi-step inference is required to recover action structure from uninformative Gaussian noise, leading to a poor efficiency-quality trade-off under real-time constraints. We address this issue by rethinking the role of the starting point in generative action modeling. Instead of shortening the sampling trajectory, we propose CF-VLA, a coarse-to-fine two-stage formulation that restructures action generation into a coarse initialization step that constructs an action-aware starting point, followed by a single-step local refinement that corrects residual errors. Concretely, the coarse stage learns a conditional posterior over endpoint velocity to transform Gaussian noise into a structured initialization, while the fine stage performs a fixed-time refinement from this initialization. To stabilize training, we introduce a stepwise strategy that first learns a controlled coarse predictor and then performs joint optimization. Experiments on CALVIN and LIBERO show that our method establishes a strong efficiency-performance frontier under low-NFE (Number of Function Evaluations) regimes: it consistently outperforms existing NFE=2 methods, matches or surpasses the NFE=10 $\pi_{0.5}$ baseline on several metrics, reduces action sampling latency by 75.4%, and achieves the best average real-robot success rate of 83.0%, outperforming MIP by 19.5 points and $\pi_{0.5}$ by 4.0 points. These results suggest that structured, coarse-to-fine generation enables both strong performance and efficient inference. Our code is available at https://github.com/EmbodiedAI-RoboTron/CF-VLA.
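To make the two-stage idea concrete, here is a minimal NumPy sketch of a coarse-to-fine sampler in the spirit described above: one coarse network evaluation predicts an endpoint velocity that turns Gaussian noise into a structured initialization, and one fine network evaluation performs the single refinement step, for a total of NFE=2. The function names (`coarse_net`, `fine_net`), the additive update rule, and the toy linear "networks" are illustrative assumptions, not the paper's exact parameterization or training objective.

```python
import numpy as np

def coarse_to_fine_sample(noise, cond, coarse_net, fine_net):
    """Sketch of a two-evaluation coarse-to-fine sampler (assumed form).

    Stage 1 (coarse): predict an endpoint velocity from Gaussian noise
    and the conditioning, producing an action-aware starting point.
    Stage 2 (fine): a single fixed-time refinement step that corrects
    residual error in the coarse initialization. Total NFE = 2.
    """
    v = coarse_net(noise, cond)           # NFE 1: endpoint-velocity prediction
    x_init = noise + v                    # structured, action-aware initialization
    action = x_init + fine_net(x_init, cond)  # NFE 2: single-step local refinement
    return action

# Toy stand-ins for the learned networks (illustrative only).
rng = np.random.default_rng(0)
W_c = rng.normal(size=(8, 8)) * 0.1
W_f = rng.normal(size=(8, 8)) * 0.1
coarse = lambda z, c: z @ W_c + c   # hypothetical coarse predictor
fine = lambda z, c: z @ W_f         # hypothetical refinement network

noise = rng.normal(size=(4, 8))     # batch of 4 action chunks, action dim 8
cond = rng.normal(size=(4, 8))      # vision-language conditioning features
action = coarse_to_fine_sample(noise, cond, coarse, fine)
print(action.shape)  # (4, 8)
```

Contrast this with a standard flow-matching sampler, which would integrate a velocity field over many small steps (e.g. NFE=10) starting from the same Gaussian noise; here the coarse stage replaces that long trajectory with a single learned jump to a good starting point.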