The paper introduces TactAlign, a method for transferring tactile signals from human demonstrations to robots with different embodiments, addressing the challenge of varying sensing modalities. TactAlign uses a rectified flow to map human and robot tactile observations into a shared latent space without requiring paired data or manual labels. Experiments on contact-rich tasks demonstrate that TactAlign enables effective human-to-robot policy transfer, generalizes to unseen objects and tasks, and even achieves zero-shot transfer on a dexterous light bulb screwing task.
Human demonstrations collected with wearable devices (e.g., tactile gloves) provide fast, dexterous supervision for policy learning, guided by rich, natural tactile feedback. A key challenge, however, is transferring human-collected tactile signals to robots despite differences in sensing modality and embodiment. Existing human-to-robot (H2R) approaches that incorporate touch often assume identical tactile sensors, require paired data, and tolerate little to no embodiment gap between the human demonstrator and the robot, limiting scalability and generality. We propose TactAlign, a cross-embodiment tactile alignment method that transfers human-collected tactile signals to a robot with a different embodiment. TactAlign maps human and robot tactile observations into a shared latent representation using a rectified flow, without paired datasets, manual labels, or privileged information. Our method enables low-cost latent transport guided by pseudo-pairs derived from hand-object interactions. We demonstrate that TactAlign improves H2R policy transfer across multiple contact-rich tasks (pivoting, insertion, lid closing), generalizes to unseen objects and tasks from less than 5 minutes of human data, and enables zero-shot H2R transfer on a highly dexterous task (light bulb screwing).
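To make the rectified-flow idea concrete, below is a minimal PyTorch sketch of how a learned flow could transport human tactile latents toward a robot tactile latent space. This is an illustration of the general rectified-flow recipe (straight-line interpolation between endpoint pairs, regressing a constant velocity), not the authors' implementation; all names here (VelocityNet, rectified_flow_loss, transport, and the pseudo-paired latents z_human and z_robot) are assumptions for the sake of the example.

```python
# Illustrative sketch only: rectified flow between pseudo-paired latents.
# Assumes z_human and z_robot are embeddings of human/robot tactile
# observations that have been pseudo-paired (e.g., via hand-object
# interaction cues, as the abstract suggests).
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Predicts the flow velocity v(x_t, t) in the shared latent space."""
    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Condition on time by concatenating t to the latent.
        return self.net(torch.cat([x_t, t], dim=-1))

def rectified_flow_loss(v_net: VelocityNet,
                        z_human: torch.Tensor,
                        z_robot: torch.Tensor) -> torch.Tensor:
    """Rectified-flow objective: interpolate linearly between paired
    endpoints and regress the constant velocity (z_robot - z_human)."""
    t = torch.rand(z_human.shape[0], 1)      # random time in [0, 1]
    x_t = (1 - t) * z_human + t * z_robot    # straight-line interpolant
    target = z_robot - z_human               # constant velocity target
    return ((v_net(x_t, t) - target) ** 2).mean()

@torch.no_grad()
def transport(v_net: VelocityNet, z_human: torch.Tensor,
              steps: int = 10) -> torch.Tensor:
    """Euler integration of the learned ODE from t=0 to t=1,
    mapping a human tactile latent into the robot latent space."""
    x = z_human.clone()
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((x.shape[0], 1), i * dt)
        x = x + dt * v_net(x, t)
    return x
```

Because rectified flows learn near-straight trajectories, transport at inference time needs only a handful of Euler steps, which is consistent with the abstract's claim of low-cost latent transport.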