This paper introduces Prompt-based Missing Modality Adaptation (ProMMA), a framework for robust multimodal sentiment analysis that addresses the challenge of missing modalities by evaluating their importance before imputation. ProMMA uses a Missing Modality Evaluator to dynamically assess modality importance, a Modality-invariant Prompt Disentanglement module to capture local correlations, and a Dynamic Prompt Weighting module to suppress interference from missing modalities. Experiments on the CMU-MOSI, CMU-MOSEI, and CH-SIMS datasets demonstrate state-of-the-art performance and stable results under diverse missing-modality settings, avoiding reliance on potentially low-quality imputed data.
Instead of blindly imputing missing data, ProMMA first evaluates the importance of each missing modality, leading to state-of-the-art multimodal sentiment analysis even when data is incomplete.
The missing modality problem poses a fundamental challenge in multimodal sentiment analysis, significantly degrading model accuracy and generalization in real-world scenarios. Existing approaches primarily improve robustness through prompt learning and pre-trained models. However, two limitations remain. First, the necessity of generating missing modalities lacks rigorous evaluation. Second, the structural dependencies among multimodal prompts and their global coherence are insufficiently explored. To address these issues, a Prompt-based Missing Modality Adaptation framework is proposed. A Missing Modality Evaluator is introduced at the input stage to dynamically assess the importance of missing modalities using pre-trained models and pseudo-labels, thereby avoiding low-quality data imputation. Building on this, a Modality-invariant Prompt Disentanglement module decomposes shared prompts into modality-specific private prompts to capture intrinsic local correlations and improve representation quality. In addition, a Dynamic Prompt Weighting module computes mutual-information-based weights from cross-attention outputs to adaptively suppress interference from missing modalities. To enhance global consistency, a Multi-level Prompt Dynamic Connection module integrates shared prompts with self-attention outputs through residual connections, leveraging global prompt priors to strengthen key guidance features. Extensive experiments on three public benchmarks, including CMU-MOSI, CMU-MOSEI, and CH-SIMS, demonstrate that the proposed framework achieves state-of-the-art performance and stable results under diverse missing-modality settings. The implementation is available at https://github.com/rongfei-chen/ProMMA.
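The pipeline sketched in the abstract (shared prompts disentangled into per-modality private prompts, cross-attention outputs weighted to suppress a missing modality, and a residual connection back to the shared prompt) can be illustrated with a toy NumPy sketch. This is not the authors' implementation: the feature shapes, the linear disentanglement maps, and the norm-based importance proxy (standing in for the paper's mutual-information-based weights) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, key, value):
    # Scaled dot-product attention: prompt queries attend over a modality sequence.
    scores = query @ key.T / np.sqrt(query.shape[-1])
    return softmax(scores, axis=-1) @ value

# Toy features for three modalities; audio is treated as missing.
d = 8
feats = {m: rng.standard_normal((5, d)) for m in ("text", "audio", "vision")}
present = {"text": True, "audio": False, "vision": True}

shared_prompt = rng.standard_normal((4, d))

# Prompt disentanglement: derive a modality-specific private prompt from the
# shared prompt (here a random linear map per modality, for illustration).
private = {m: shared_prompt @ rng.standard_normal((d, d)) / np.sqrt(d)
           for m in feats}

# Dynamic prompt weighting: score each modality's cross-attention output and
# down-weight the missing one before fusing. The norm-based score below is a
# placeholder for the paper's mutual-information-based weights.
outputs, scores = {}, {}
for m, x in feats.items():
    out = cross_attention(private[m], x, x)
    outputs[m] = out
    scores[m] = float(np.linalg.norm(out)) * (1.0 if present[m] else 0.1)

w = softmax(np.array([scores[m] for m in feats]))
fused = sum(wi * outputs[m] for wi, m in zip(w, feats))

# Multi-level connection: residual add of the shared prompt as a global prior.
fused = fused + shared_prompt
print(fused.shape)  # (4, 8)
```

Running the sketch shows the missing (audio) modality receiving the smallest fusion weight, so its imputation-free representation contributes least to the fused prompt.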