The paper introduces Vision-Sound-Language-Action (VSLA) as a continuous control paradigm for robotic manipulation, addressing the limitations of existing Vision-Language-Action (VLA) models that treat sound as static prompts. To realize VSLA, the authors propose HEAR, a framework comprising a streaming Historizer, an Envisioner, an Advancer (an audio world model), and a flow-matching Realizer policy. They also contribute OpenX-Sound for pretraining and HEAR-Bench, a sound-centric manipulation benchmark, demonstrating the importance of causal persistence and temporal learning for robust sound-centric manipulation.
Robots can now use real-time environmental sounds to guide manipulation tasks, thanks to a new framework that overcomes the "Blind Execution Interval" of traditional vision-language-action models.
While recent Vision-Language-Action (VLA) models have begun to incorporate audio, they typically treat sound as static pre-execution prompts or focus exclusively on human speech. This leaves a significant gap in real-time, sound-centric manipulation where fleeting environmental acoustics provide critical state verification during task execution. Consequently, key sounds are easily missed due to low-frequency updates or system latency. This problem is exacerbated by action chunking with open-loop execution, which creates a Blind Execution Interval where acoustic events are lost between discrete audio observation windows. Recognizing the necessity of continuous auditory awareness, we formalize Vision-Sound-Language-Action (VSLA) as a continuous control paradigm conditioned on vision, streaming audio, language, and proprioception under delayed decision loops. As an instantiation, we introduce HEAR, a VSLA framework integrating four components: (i) a streaming Historizer to maintain a compact, causal audio context across execution gaps; (ii) an Envisioner adapted from omni foundation models to reason over multi-sensory inputs; (iii) an Advancer, formulated as an audio world model, to learn temporal dynamics by predicting near-future audio codes; and (iv) a flow-matching Realizer policy to generate smooth action chunks. To address the scarcity of pretraining data and evaluations for VSLA, we construct OpenX-Sound for pretraining, alongside HEAR-Bench, the first sound-centric manipulation benchmark with strict causal timing rules. Our results suggest that robust sound-centric manipulation necessitates causal persistence and explicit temporal learning. This framework provides a practical step toward multi-sensory foundation models for embodied agents, enabling robots to perceive and interact with dynamic environments. Code and videos are available at https://hear.irmv.top.
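To make the Realizer's generation step concrete, the sketch below shows the generic flow-matching sampling loop it builds on: starting from Gaussian noise, an action chunk is produced by Euler-integrating a learned velocity field from t=0 to t=1. This is an illustrative minimal sketch, not the paper's implementation; the function `velocity_field` and the `toy_field` stand-in, along with the horizon and dimension values, are assumptions for demonstration.

```python
import numpy as np

def sample_action_chunk(velocity_field, obs, horizon=8, action_dim=7,
                        steps=10, rng=None):
    """Illustrative flow-matching sampler: integrate a learned velocity
    field from Gaussian noise toward an action chunk via Euler steps.

    `velocity_field(x, t, obs)` is a hypothetical model returning dx/dt,
    standing in for a network conditioned on vision/sound/language features.
    """
    rng = rng or np.random.default_rng(0)
    x = rng.standard_normal((horizon, action_dim))  # start from pure noise
    dt = 1.0 / steps
    for i in range(steps):
        t = i * dt
        x = x + dt * velocity_field(x, t, obs)  # one Euler integration step
    return x

# Toy velocity field that drifts samples toward a fixed target chunk
# (a stand-in for a trained, multi-sensory-conditioned network).
def toy_field(x, t, obs):
    return obs["target"] - x  # linear drift toward the target

obs = {"target": np.zeros((8, 7))}
chunk = sample_action_chunk(toy_field, obs)  # one 8-step, 7-DoF action chunk
```

Because the integration produces a whole chunk at once, any sound event arriving mid-chunk falls inside the Blind Execution Interval the abstract describes, which is what the streaming Historizer's persistent causal context is meant to bridge.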