The paper introduces Concept-Gated Visual Distillation (CGVD), a training-free inference framework, to address the "Precision-Reasoning Gap" in Vision-Language-Action (VLA) models caused by visual clutter. CGVD parses instructions into safe and distractor sets, refines targets using cross-validation and spatial disambiguation, and then uses Fourier-based inpainting to suppress semantic distractors while preserving spatial geometry. Experiments in cluttered manipulation tasks show that CGVD significantly outperforms state-of-the-art baselines, achieving a 77.5% success rate compared to the baseline's 43.0%.
A training-free visual distillation method boosts VLA model success rates in cluttered environments by 34.5 percentage points (77.5% vs. 43.0%), indicating that targeted noise reduction can outperform brute-force scaling.
Vision-Language-Action (VLA) models demonstrate impressive zero-shot generalization but frequently suffer from a "Precision-Reasoning Gap" in cluttered environments. This failure is driven by background-induced feature dilution, where high-frequency semantic noise corrupts the geometric grounding required for precise manipulation. To bridge this gap, we propose Concept-Gated Visual Distillation (CGVD), a training-free, model-agnostic inference framework that stabilizes VLA policies. CGVD operates by parsing instructions into safe and distractor sets, utilizing a two-layer target refinement process--combining cross-validation and spatial disambiguation--to explicitly penalize false positives and isolate genuine manipulation targets. We then process the scene via Fourier-based inpainting, generating a clean observation that actively suppresses semantic distractors while preserving critical spatial geometry and visual proprioception. Extensive evaluations in highly cluttered manipulation tasks demonstrate that CGVD prevents performance collapse. In environments with dense semantic distractors, our method significantly outperforms state-of-the-art baselines, achieving a 77.5% success rate compared to the baseline's 43.0%. By enforcing strict attribute adherence, CGVD establishes inference-time visual distillation as a critical prerequisite for robust robotic manipulation in clutter.
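The core idea of the inpainting stage can be illustrated with a minimal sketch. The snippet below is not the paper's implementation; it assumes a binary distractor mask is already available (in CGVD this would come from the target-refinement stage) and uses a simple circular low-pass filter in the 2-D Fourier domain to strip high-frequency semantic texture, blending the smoothed content back only into masked distractor regions so that unmasked pixels (and hence scene geometry) are left untouched:

```python
import numpy as np

def fourier_suppress(image, distractor_mask, cutoff_ratio=0.1):
    """Illustrative Fourier-based distractor suppression (a sketch,
    not the paper's method). Low-pass the observation in the
    frequency domain, then inpaint only the masked distractor
    regions with the smoothed content, preserving all other pixels.
    """
    # Centered 2-D FFT of a single-channel observation
    freq = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    cy, cx = h // 2, w // 2
    # Circular low-pass mask: keeps coarse spatial structure,
    # removes the high-frequency semantic noise
    yy, xx = np.ogrid[:h, :w]
    radius = cutoff_ratio * min(h, w)
    lowpass = ((yy - cy) ** 2 + (xx - cx) ** 2) <= radius ** 2
    smoothed = np.real(np.fft.ifft2(np.fft.ifftshift(freq * lowpass)))
    # Blend: smoothed content where distractors were detected,
    # original pixels everywhere else (geometry preserved)
    return np.where(distractor_mask, smoothed, image)
```

Restricting the replacement to the masked region is what lets a filter like this suppress distractor texture without disturbing the spatial cues the policy needs outside those regions.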