This paper explores test-time verification as a method to improve vision-language-action (VLA) alignment, addressing the "intention-action gap" in embodied instruction following. The authors demonstrate that jointly scaling rephrased instructions and generated actions at test time increases sample diversity and improves action selection. They introduce CoVer, a contrastive verifier, together with a hierarchical verification inference pipeline, and show that this verification approach outperforms scaling policy pre-training on the SIMPLER and PolaRiS benchmarks.
Verification at test time can be a surprisingly effective alternative to scaling policy learning for vision-language-action alignment, yielding substantial gains in both simulated and real-world robotic tasks.
The long-standing vision of general-purpose robots hinges on their ability to understand and act upon natural language instructions. Vision-Language-Action (VLA) models have made remarkable progress toward this goal, yet their generated actions can still misalign with the given instructions. In this paper, we investigate test-time verification as a means to shrink the "intention-action gap." We first characterize the test-time scaling law for embodied instruction following and demonstrate that jointly scaling the number of rephrased instructions and generated actions greatly increases test-time sample diversity, often recovering correct actions more efficiently than scaling each dimension independently. To capitalize on these scaling laws, we present CoVer, a contrastive verifier for vision-language-action alignment, and show that our architecture scales gracefully with additional computational resources and data. We then introduce "boot-time compute" and a hierarchical verification inference pipeline for VLAs. At deployment, our framework precomputes a diverse set of rephrased instructions from a vision-language model (VLM), repeatedly generates action candidates for each instruction, and then uses a verifier to select the optimal high-level prompt and low-level action chunks. Compared to scaling policy pre-training on the same data, our verification approach yields 22% gains in-distribution and 13% out-of-distribution on the SIMPLER benchmark, with a further 45% improvement in real-world experiments. On the PolaRiS benchmark, CoVer achieves 14% gains in task progress and 9% in success rate.
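The deployment pipeline described above (precompute rephrased instructions, sample action candidates per instruction, then let a verifier pick the best instruction-action pair) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the rephraser, policy sampler, and verifier below are hypothetical stand-ins for the VLM, the VLA policy, and CoVer, respectively.

```python
# Hypothetical sketch of hierarchical test-time verification for a VLA.
# rephrase_instruction, sample_actions, and verifier_score are illustrative
# placeholders, NOT the paper's actual API.
import random
from dataclasses import dataclass


@dataclass
class Candidate:
    instruction: str
    action: str
    score: float


def rephrase_instruction(instruction: str, n: int) -> list[str]:
    # Stand-in for VLM rephrasing ("boot-time compute"): precomputed once
    # at deployment. Here we just tag copies to keep the sketch runnable.
    return [f"{instruction} (rephrase {i})" for i in range(n)]


def sample_actions(instruction: str, k: int) -> list[str]:
    # Stand-in for the VLA policy sampling k candidate action chunks.
    return [f"action-{i}-for[{instruction}]" for i in range(k)]


def verifier_score(instruction: str, action: str) -> float:
    # Stand-in for CoVer's contrastive instruction-action alignment score.
    return random.random()


def select_best(instruction: str, n_rephrases: int, k_actions: int) -> Candidate:
    """Jointly scale rephrases and actions; return the highest-scoring pair."""
    candidates = [
        Candidate(rephrased, action, verifier_score(rephrased, action))
        for rephrased in rephrase_instruction(instruction, n_rephrases)
        for action in sample_actions(rephrased, k_actions)
    ]
    return max(candidates, key=lambda c: c.score)


best = select_best("pick up the red block", n_rephrases=4, k_actions=8)
print(best.instruction, "->", best.action)
```

The joint scaling the paper characterizes corresponds to growing both `n_rephrases` and `k_actions`, so the verifier searches over `n_rephrases * k_actions` candidates rather than scaling either dimension alone.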