The paper introduces IDRL, a novel framework for multimodal depression detection that addresses inter-modal inconsistency and individual differences in depressive presentations. IDRL disentangles multimodal representations into modality-common, modality-specific, and depression-unrelated spaces to enhance modality alignment and suppress irrelevant information. It also employs an individual-aware modality-fusion module (IAF) to dynamically adjust the weights of disentangled depression-related features based on their predictive significance, enabling adaptive cross-modal fusion.
Achieve superior depression detection by disentangling multimodal representations and adaptively fusing them based on individual characteristics.
Depression is a severe mental disorder, and its reliable identification plays a critical role in early intervention and treatment. Multimodal depression detection aims to improve diagnostic performance by jointly modeling complementary information from multiple modalities. Recently, numerous multimodal learning approaches have been proposed for depression analysis; however, these methods suffer from the following limitations: 1) inter-modal inconsistency and depression-unrelated interference, where depression-related cues may conflict across modalities while substantial irrelevant content obscures critical depressive signals, and 2) diverse individual depressive presentations, leading to individual differences in modality and cue importance that hinder reliable fusion. To address these issues, we propose the Individual-aware Multimodal Depression-related Representation Learning framework (IDRL) for robust depression diagnosis. Specifically, IDRL 1) disentangles multimodal representations into a modality-common depression space, a modality-specific depression space, and a depression-unrelated space to enhance modality alignment while suppressing irrelevant information, and 2) introduces an individual-aware modality-fusion module (IAF) that dynamically adjusts the weights of disentangled depression-related features based on their predictive significance, thereby achieving adaptive cross-modal fusion for different individuals. Extensive experiments demonstrate that IDRL achieves superior and robust performance for multimodal depression detection.
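The two-stage design described above (disentanglement into three subspaces, then individual-aware weighted fusion) can be sketched in simplified form. This is a minimal NumPy illustration under assumptions of my own, not the paper's implementation: the projection matrices, the gating vector `w_gate`, the dimensions, and the use of a softmax over a linear "significance" score are all hypothetical stand-ins for the learned components IDRL would train end to end.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax for the fusion weights.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

D_IN, D_SUB = 16, 8          # hypothetical input-feature and subspace dims
MODALITIES = ("audio", "visual", "text")
SPACES = ("common", "specific", "unrelated")

# One linear projection per modality per subspace: modality-common depression,
# modality-specific depression, and depression-unrelated (in IDRL these would
# be learned; here they are random placeholders).
proj = {m: {s: rng.standard_normal((D_IN, D_SUB)) * 0.1 for s in SPACES}
        for m in MODALITIES}

# Hypothetical scoring vector standing in for the IAF module's learned
# estimate of each feature's predictive significance.
w_gate = rng.standard_normal(D_SUB) * 0.1

def disentangle(features):
    """Project each modality's features into the three subspaces."""
    return {m: {s: features[m] @ proj[m][s] for s in SPACES}
            for m in features}

def individual_aware_fuse(spaces):
    """Weight the depression-related parts per individual; the
    depression-unrelated subspace is excluded from fusion."""
    parts = [spaces[m][s] for m in spaces for s in ("common", "specific")]
    scores = np.array([p @ w_gate for p in parts])  # significance proxy
    weights = softmax(scores)                       # individual-specific weights
    return sum(w * p for w, p in zip(weights, parts))

features = {m: rng.standard_normal(D_IN) for m in MODALITIES}
fused = individual_aware_fuse(disentangle(features))
print(fused.shape)  # fused depression-related representation, shape (8,)
```

Because the fusion weights are computed from each individual's own projected features, two subjects with different depressive presentations yield different modality weightings from the same model, which is the adaptive behavior the IAF module is designed to provide.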