This paper introduces SignVLA, a gloss-free Vision-Language-Action framework that maps sign language gestures directly to robotic manipulation commands, with no intermediate gloss annotations. Through geometric normalization, temporal smoothing, and lexical refinement, SignVLA transforms continuous finger-spelling gestures into coherent language commands for real-time robotic control. Experimental results demonstrate the framework's effectiveness in grounding sign-derived instructions in precise robotic actions, highlighting its potential for accessible and scalable embodied intelligence.
Forget clunky gloss annotations: SignVLA directly translates sign language gestures into robot actions, opening up intuitive human-robot interaction.
We present, to our knowledge, the first sign language-driven Vision-Language-Action (VLA) framework for intuitive and inclusive human-robot interaction. Unlike conventional approaches that rely on gloss annotations as intermediate supervision, the proposed system adopts a gloss-free paradigm and directly maps visual sign gestures to semantic instructions. This design reduces annotation cost and avoids the information loss introduced by gloss representations, enabling more natural and scalable multimodal interaction. In this work, we focus on a real-time alphabet-level finger-spelling interface that provides a robust and low-latency communication channel for robotic control. Compared with large-scale continuous sign language recognition, alphabet-level interaction offers improved reliability, interpretability, and deployment feasibility in safety-critical embodied environments. The proposed pipeline transforms continuous gesture streams into coherent language commands through geometric normalization, temporal smoothing, and lexical refinement, ensuring stable and consistent interaction. Furthermore, the framework is designed to support future integration of transformer-based gloss-free sign language models, enabling scalable word-level and sentence-level semantic understanding. Experimental results demonstrate the effectiveness of the proposed system in grounding sign-derived instructions in precise robotic actions under diverse interaction scenarios. These results highlight the potential of the framework to advance accessible, scalable, and multimodal embodied intelligence.
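The abstract names three processing stages (geometric normalization, temporal smoothing, lexical refinement) without implementation detail. Below is a minimal Python sketch of one plausible reading of that stage chain, assuming MediaPipe-style 21-point hand landmarks and an upstream per-frame letter classifier; every function name, threshold, and the command vocabulary is a hypothetical illustration, not the paper's code.

```python
# Illustrative sketch of a gesture-to-command pipeline; not the paper's
# implementation. Assumes (21, 3) hand landmarks (wrist at index 0,
# middle-finger MCP at index 9) and a per-frame letter classifier upstream.
import numpy as np
from collections import Counter, deque
from difflib import get_close_matches


def normalize_landmarks(landmarks: np.ndarray) -> np.ndarray:
    """Geometric normalization: put the wrist at the origin and divide by
    a hand-scale reference so downstream classification is invariant to
    hand position and size."""
    centered = landmarks - landmarks[0]
    scale = np.linalg.norm(centered[9]) + 1e-8  # wrist-to-MCP distance
    return centered / scale


class TemporalSmoother:
    """Temporal smoothing: emit a letter only after it wins a majority
    vote over a sliding window of per-frame predictions, and debounce so
    a held pose produces one letter rather than a repeated stream."""

    def __init__(self, window: int = 15, min_votes: int = 10):
        self.buffer: deque[str] = deque(maxlen=window)
        self.min_votes = min_votes
        self.last_emitted: str | None = None

    def update(self, letter: str) -> str | None:
        self.buffer.append(letter)
        winner, votes = Counter(self.buffer).most_common(1)[0]
        if votes >= self.min_votes and winner != self.last_emitted:
            self.last_emitted = winner
            return winner
        return None


def lexical_refinement(raw_word: str, vocabulary: list[str]) -> str:
    """Lexical refinement: snap a noisy fingerspelled string to the
    closest word in a fixed command vocabulary."""
    matches = get_close_matches(raw_word.lower(), vocabulary, n=1, cutoff=0.6)
    return matches[0] if matches else raw_word.lower()


if __name__ == "__main__":
    # Geometric normalization on random stand-in landmarks.
    landmarks = np.random.rand(21, 3)
    print(normalize_landmarks(landmarks)[0])  # wrist is now [0. 0. 0.]

    # Simulated per-frame classifier output: the user fingerspells
    # "P-I-C-K" with a couple of jittery misreads in between.
    frames = ["p"] * 20 + ["q"] * 2 + ["i"] * 20 + ["c"] * 20 + ["k"] * 20
    smoother = TemporalSmoother()
    letters = [out for f in frames if (out := smoother.update(f)) is not None]

    vocabulary = ["pick", "place", "open", "close", "stop"]
    print(lexical_refinement("".join(letters), vocabulary))  # -> "pick"
```

In this reading, the debounce inside the smoother is what delivers the "stable and consistent interaction" the abstract emphasizes: a pose held across many frames yields exactly one letter, and brief misclassifications are outvoted before they reach the lexical stage.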