The paper introduces MoBind, a hierarchical contrastive learning framework that learns a joint representation between IMU signals and 2D pose sequences from video, addressing three challenges: filtering out visual background, modeling multi-sensor IMU configurations, and achieving fine-grained temporal alignment. MoBind aligns IMU signals with skeletal motion sequences and decomposes full-body motion into local body-part trajectories for semantically grounded multi-sensor alignment. Evaluated on the mRI, TotalCapture, and EgoHumans datasets, MoBind outperforms baselines in cross-modal retrieval, temporal synchronization, subject and body-part localization, and action recognition.
Achieve sub-second temporal alignment between IMU signals and video pose sequences by focusing on skeletal motion rather than raw pixels, enabling more accurate cross-modal understanding.
We aim to learn a joint representation between inertial measurement unit (IMU) signals and 2D pose sequences extracted from video, enabling accurate cross-modal retrieval, temporal synchronization, subject and body-part localization, and action recognition. To this end, we introduce MoBind, a hierarchical contrastive learning framework designed to address three challenges: (1) filtering out irrelevant visual background, (2) modeling structured multi-sensor IMU configurations, and (3) achieving fine-grained, sub-second temporal alignment. To isolate motion-relevant cues, MoBind aligns IMU signals with skeletal motion sequences rather than raw pixels. We further decompose full-body motion into local body-part trajectories, pairing each with its corresponding IMU to enable semantically grounded multi-sensor alignment. To capture detailed temporal correspondence, MoBind employs a hierarchical contrastive strategy that first aligns token-level temporal segments, then fuses local (body-part) alignment with global (body-wide) motion aggregation. Evaluated on mRI, TotalCapture, and EgoHumans, MoBind consistently outperforms strong baselines across all four tasks, demonstrating robust fine-grained temporal alignment while preserving coarse semantic consistency across modalities. Code is available at https://github.com/bbvisual/MoBind.
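The hierarchical contrastive strategy described above can be sketched in PyTorch. This is a minimal illustration, not the paper's implementation: the function names (`info_nce`, `hierarchical_contrastive_loss`), the symmetric InfoNCE formulation, the mean-pooling used for the global body-wide embedding, and the loss weights are all assumptions for the sketch. It assumes encoders have already produced per-body-part, per-time-token embeddings for both modalities.

```python
import torch
import torch.nn.functional as F

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE between two batches of paired embeddings.

    Rows of `a` and `b` at the same index are positives; all other
    cross pairs in the batch serve as negatives.
    """
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature                # (N, N) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

def hierarchical_contrastive_loss(imu_tokens, pose_tokens,
                                  w_local=1.0, w_global=1.0):
    """Hypothetical two-level loss in the spirit of MoBind.

    imu_tokens, pose_tokens: (B, P, T, D) tensors — batch size, body
    parts, temporal tokens, embedding dim. The local term aligns each
    (body-part, time-segment) token with its cross-modal counterpart;
    the global term aligns body-wide embeddings pooled over parts and
    time, preserving coarse semantic consistency.
    """
    B, P, T, D = imu_tokens.shape
    # Local: every matching (part, time) token pair is a positive.
    local = info_nce(imu_tokens.reshape(B * P * T, D),
                     pose_tokens.reshape(B * P * T, D))
    # Global: mean-pool parts and time into one motion embedding per clip.
    glob = info_nce(imu_tokens.mean(dim=(1, 2)),
                    pose_tokens.mean(dim=(1, 2)))
    return w_local * local + w_global * glob
```

In this sketch the token-level term supplies the fine-grained (sub-second) supervision, while the pooled term keeps whole-sequence semantics aligned; the paper's actual fusion of local and global alignment may differ.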