This paper introduces a cross-modal perception framework that fuses visual and tactile data within a vision-language model to infer the physical properties of objects for robotic manipulation. The framework uses a hierarchical feature alignment mechanism and a refined prompting strategy to enable property-specific predictions. Experiments on 35 diverse objects demonstrate that the proposed approach outperforms existing baselines and exhibits strong zero-shot generalization in physical property inference.
Fusing vision and touch in a vision-language model unlocks surprisingly accurate robotic perception of object properties, even in zero-shot scenarios.
Inferring physical properties can significantly enhance robotic manipulation by enabling robots to handle objects safely and efficiently through adaptive grasping strategies. Previous approaches have typically relied on either tactile or visual data alone, limiting their ability to capture the full range of physical properties. We introduce a novel cross-modal perception framework that integrates visual observations with tactile representations within a multimodal vision-language model. Our physical reasoning framework, which employs a hierarchical feature alignment mechanism and a refined prompting strategy, enables our model to make property-specific predictions that strongly correlate with ground-truth measurements. Evaluated on 35 diverse objects, our approach outperforms existing baselines and demonstrates strong zero-shot generalization.

Keywords: tactile perception, visual-tactile fusion, physical property inference, multimodal integration, robot perception
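To make the fusion idea concrete, the sketch below shows one plausible way such a pipeline could be wired up in PyTorch: per-level projections align visual and tactile features into a shared space, cross-attention fuses them at each level of the hierarchy, and the result is pooled into soft prompt tokens that a vision-language model could consume alongside a property-specific text prompt. All module names, dimensions, and the number of feature levels here are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of visual-tactile fusion with hierarchical feature
# alignment; names and shapes are assumptions, not the paper's architecture.
import torch
import torch.nn as nn


class HierarchicalAlignmentFusion(nn.Module):
    """Aligns multi-level visual and tactile features, then fuses them into
    soft prompt tokens for a downstream vision-language model."""

    def __init__(self, vis_dims=(256, 512, 1024), tac_dims=(128, 256, 512),
                 embed_dim=768, num_prompt_tokens=8):
        super().__init__()
        # Per-level projections map both modalities into a shared embedding space.
        self.vis_proj = nn.ModuleList(nn.Linear(d, embed_dim) for d in vis_dims)
        self.tac_proj = nn.ModuleList(nn.Linear(d, embed_dim) for d in tac_dims)
        # Cross-attention lets tactile tokens attend to visual context per level.
        self.cross_attn = nn.ModuleList(
            nn.MultiheadAttention(embed_dim, num_heads=8, batch_first=True)
            for _ in vis_dims
        )
        # Learnable queries pool the fused levels into a fixed prompt length.
        self.prompt_queries = nn.Parameter(torch.randn(num_prompt_tokens, embed_dim))
        self.pool_attn = nn.MultiheadAttention(embed_dim, num_heads=8, batch_first=True)

    def forward(self, vis_feats, tac_feats):
        # vis_feats / tac_feats: lists of (batch, tokens, dim) tensors, one per level.
        fused_levels = []
        for level, (v, t) in enumerate(zip(vis_feats, tac_feats)):
            v = self.vis_proj[level](v)
            t = self.tac_proj[level](t)
            fused, _ = self.cross_attn[level](query=t, key=v, value=v)
            fused_levels.append(fused + t)  # residual keeps the tactile signal intact
        fused = torch.cat(fused_levels, dim=1)
        queries = self.prompt_queries.unsqueeze(0).expand(fused.size(0), -1, -1)
        prompt_tokens, _ = self.pool_attn(query=queries, key=fused, value=fused)
        return prompt_tokens  # (batch, num_prompt_tokens, embed_dim)


if __name__ == "__main__":
    model = HierarchicalAlignmentFusion()
    vis = [torch.randn(2, 49, d) for d in (256, 512, 1024)]   # e.g. ViT patch features
    tac = [torch.randn(2, 16, d) for d in (128, 256, 512)]    # e.g. tactile encoder features
    print(model(vis, tac).shape)  # torch.Size([2, 8, 768])
```

In a setup like this, the returned prompt tokens would be prepended to the embedded text of a property-specific question (e.g. about stiffness or mass) before decoding; the paper's actual alignment and prompting mechanisms may differ.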