This paper introduces STMI, a multi-modal object re-identification (ReID) framework that uses segmentation masks to guide feature modulation, reallocates semantic tokens into compact representations, and captures high-order relationships via cross-modal hypergraph interaction. The Segmentation-Guided Feature Modulation (SFM) module leverages SAM-generated masks to enhance foreground representations. Experiments on the RGBNT201, RGBNT100, and MSVR310 datasets demonstrate STMI's effectiveness and robustness.
By using SAM-generated masks and cross-modal hypergraphs, STMI focuses more effectively on foreground objects and captures complex inter-modality relationships, achieving state-of-the-art results in multi-modal object ReID.
Multi-modal object Re-Identification (ReID) aims to exploit complementary information from different modalities to retrieve specific objects. However, existing methods often rely on hard token filtering or simple fusion strategies, which can lead to the loss of discriminative cues and increased background interference. To address these challenges, we propose STMI, a novel multi-modal learning framework consisting of three key components: (1) a Segmentation-Guided Feature Modulation (SFM) module that leverages SAM-generated masks to enhance foreground representations and suppress background noise through learnable attention modulation; (2) a Semantic Token Reallocation (STR) module that employs learnable query tokens and an adaptive reallocation mechanism to extract compact and informative representations without discarding any tokens; (3) a Cross-Modal Hypergraph Interaction (CHI) module that constructs a unified hypergraph across modalities to capture high-order semantic relationships. Extensive experiments on public benchmarks (i.e., RGBNT201, RGBNT100, and MSVR310) demonstrate the effectiveness and robustness of our proposed STMI framework in multi-modal ReID scenarios.
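To make the three components concrete, the sketches below illustrate one plausible reading of each; everything beyond what the abstract states (module names, tensor shapes, gating forms, hyperparameters) is an assumption, not the authors' implementation. First, segmentation-guided modulation could gate patch tokens with a SAM-derived foreground mask, boosting foreground features and damping background ones:

```python
import torch
import torch.nn as nn

class SegmentationGuidedModulation(nn.Module):
    """Hypothetical sketch of SFM-style gating; the paper's exact
    formulation of learnable attention modulation may differ."""

    def __init__(self, dim: int):
        super().__init__()
        # Learnable, content-dependent gate in [0, 1] per channel.
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, tokens: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, D) patch features from a ViT backbone.
        # mask:   (B, N) foreground probability per patch, e.g. obtained
        #         by average-pooling a SAM mask down to the patch grid.
        g = self.gate(tokens)                  # (B, N, D) learned gate
        m = mask.unsqueeze(-1)                 # (B, N, 1)
        # Scale > 1 on foreground patches, < 1 on background patches.
        scale = 1.0 + g * (2.0 * m - 1.0)
        return tokens * scale
```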
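Second, "reallocation without discarding tokens" suggests a soft compression: a small set of learnable queries attends over all patch tokens, so every token contributes through attention weights rather than passing a hard keep/drop filter. Again, this is an assumed reading, not the paper's code:

```python
import torch
import torch.nn as nn

class SemanticTokenReallocation(nn.Module):
    """Hypothetical sketch of STR: compress N patch tokens into K compact
    semantic tokens via cross-attention instead of hard token filtering."""

    def __init__(self, dim: int, num_queries: int = 8, num_heads: int = 4):
        super().__init__()
        # Learnable query tokens that pull semantics from all patches.
        self.queries = nn.Parameter(torch.randn(1, num_queries, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, D). Every token is softly weighted, none dropped.
        q = self.queries.expand(tokens.size(0), -1, -1)
        out, _ = self.attn(q, tokens, tokens)
        return out  # (B, K, D) compact representation
```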
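Finally, capturing "high-order semantic relationships" over a unified hypergraph is consistent with standard hypergraph convolution (Feng et al., HGNN), applied to the tokens of all modalities stacked into one node set; whether CHI uses this exact propagation, and how its hyperedges are built (e.g., k-nearest tokens across RGB/NIR/TIR), is assumed here:

```python
import torch

def hypergraph_conv(x: torch.Tensor, h: torch.Tensor,
                    theta: torch.Tensor) -> torch.Tensor:
    """HGNN-style propagation over a cross-modal hypergraph
    (unit hyperedge weights, for simplicity).

    x:     (M, D)    node features, e.g. RGB/NIR/TIR tokens stacked
    h:     (M, E)    incidence matrix (h[i, j] = 1 if node i in edge j)
    theta: (D, Dout) learnable projection
    """
    dv = h.sum(dim=1).clamp(min=1.0)      # node degrees
    de = h.sum(dim=0).clamp(min=1.0)      # hyperedge degrees
    dv_inv_sqrt = dv.pow(-0.5)

    x = dv_inv_sqrt.unsqueeze(1) * x      # normalize nodes
    x = (h.t() @ x) / de.unsqueeze(1)     # gather nodes into hyperedges
    x = h @ x                             # scatter hyperedges back to nodes
    x = dv_inv_sqrt.unsqueeze(1) * x      # symmetric normalization
    return x @ theta
```

Because each hyperedge connects several tokens at once, one propagation step mixes information among whole groups of cross-modal tokens, which is what lets a hypergraph express relations beyond the pairwise edges of an ordinary graph.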