This paper introduces MPD, a dual-stage framework that mitigates hallucinations in large vision-language models (LVLMs) by disentangling and extracting hallucination components from their hidden representations. MPD combines semantic-aware component disentanglement, which isolates pure hallucination components, with interpretable parameter updates that selectively modify only the parameters most relevant to hallucination generation. Experiments show MPD reduces hallucinations by 23.4% while preserving 97.4% of general generative capability, outperforming existing representation-based methods at no additional computational cost.
Hallucination mitigation in LVLMs doesn't have to come at the cost of general performance: MPD cuts hallucinations by 23.4% while *preserving* 97.4% of general generative capability.
Large Vision-Language Models (LVLMs) exhibit powerful generative capabilities but frequently produce hallucinations that compromise output reliability. Fine-tuning on hallucination-free annotated data offers the most direct solution, but its high computational cost motivates recent representation-based methods, which instead mitigate hallucinatory components within hidden representations. Although these methods are efficient, we empirically observe that they degrade general generation capability, owing to incomplete extraction of hallucination components and non-selective parameter updates. To address these limitations, we propose MPD, a dual-stage framework for mitigating hallucinations without performance degradation. Specifically, MPD relies on two essential factors: (1) semantic-aware component disentanglement to extract pure hallucination components, and (2) interpretable parameter updates that selectively modify the parameters most relevant to hallucination. Extensive experiments demonstrate that MPD achieves state-of-the-art performance, reducing hallucinations by 23.4% while maintaining 97.4% of general generative capability as evaluated on LLaVA-Bench and MME, with no additional computational cost.
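To make the two-stage idea concrete, here is a minimal, illustrative PyTorch sketch. It assumes a mean-difference estimate of the hallucination direction, an orthonormal semantic basis for disentanglement, and a row-wise alignment score for selecting which parameters to edit; all function names and the specific math are assumptions for illustration, not the paper's actual MPD procedure.

```python
# Illustrative sketch only: names and formulas are assumptions, not the
# authors' MPD implementation.
import torch


def estimate_hallucination_direction(h_halluc: torch.Tensor,
                                     h_faithful: torch.Tensor) -> torch.Tensor:
    """Stage 1 (sketch): a crude hallucination component, taken as the mean
    difference between hidden states of hallucinated and faithful outputs."""
    d = h_halluc.mean(dim=0) - h_faithful.mean(dim=0)
    return d / d.norm()


def disentangle(d: torch.Tensor, semantic_basis: torch.Tensor) -> torch.Tensor:
    """Semantic-aware disentanglement (sketch): remove the overlap between the
    raw direction and a semantic subspace, keeping a 'purer' hallucination
    component. `semantic_basis` is (k, dim) with orthonormal rows."""
    coeffs = semantic_basis @ d                  # projection coefficients, (k,)
    d_pure = d - semantic_basis.T @ coeffs       # orthogonal to the semantic subspace
    return d_pure / d_pure.norm()


def selective_update(weight: torch.Tensor, d_pure: torch.Tensor,
                     alpha: float = 0.5, top_frac: float = 0.05) -> torch.Tensor:
    """Stage 2 (sketch): edit only the weight rows most aligned with the pure
    hallucination component, leaving all other parameters untouched."""
    scores = (weight @ d_pure).abs()             # alignment score per output row
    k = max(1, int(top_frac * weight.shape[0]))
    idx = scores.topk(k).indices                 # most hallucination-relevant rows
    edited = weight.clone()
    # Dampen the hallucination component only in the selected rows.
    edited[idx] -= alpha * torch.outer(weight[idx] @ d_pure, d_pure)
    return edited


if __name__ == "__main__":
    torch.manual_seed(0)
    dim, k_sem = 64, 8
    h_hal = torch.randn(200, dim) + 0.5          # toy "hallucinated" hidden states
    h_ok = torch.randn(200, dim)                 # toy "faithful" hidden states
    basis = torch.linalg.qr(torch.randn(dim, k_sem)).Q.T  # (k_sem, dim), orthonormal rows
    d = estimate_hallucination_direction(h_hal, h_ok)
    d_pure = disentangle(d, basis)
    W = torch.randn(256, dim)                    # stand-in weight matrix
    W_edited = selective_update(W, d_pure)
```

The design choice the sketch highlights is the one the abstract emphasizes: purifying the hallucination component before editing (so semantic content is not removed) and restricting the update to a small, interpretable set of parameters (so general generative capability is largely preserved).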