This paper introduces RW-Post, a new dataset for multimodal fact-checking that aligns real-world social media posts with human-written fact-checking articles, including detailed reasoning and evidence links. To leverage this dataset, the authors propose AgentFact, an agent-based framework comprising specialized agents for strategy planning, evidence retrieval, visual analysis, reasoning, and explanation generation. Experiments demonstrate that AgentFact, trained on RW-Post, significantly improves both the accuracy and interpretability of multimodal fact-checking compared to existing methods.
A new agent-based framework, AgentFact, substantially improves multimodal fact-checking accuracy and interpretability by emulating the human verification workflow.
The rapid spread of multimodal misinformation poses a growing challenge for automated fact-checking systems. Existing approaches, including large vision-language models (LVLMs) and deep multimodal fusion methods, often fall short due to limited reasoning and shallow evidence utilization. A key bottleneck is the lack of dedicated datasets that provide complete real-world multimodal misinformation instances accompanied by annotated reasoning processes and verifiable evidence. To address this limitation, we introduce RW-Post, a high-quality and explainable dataset for real-world multimodal fact-checking. RW-Post aligns real-world multimodal claims with their original social media posts, preserving the rich contextual information in which the claims are made. In addition, the dataset includes detailed reasoning and explicitly linked evidence, derived from human-written fact-checking articles via a large-language-model-assisted extraction pipeline, enabling comprehensive verification and explanation. Building upon RW-Post, we propose AgentFact, an agent-based multimodal fact-checking framework designed to emulate the human verification workflow. AgentFact consists of five specialized agents that collaboratively handle key fact-checking subtasks: strategy planning, high-quality evidence retrieval, visual analysis, reasoning, and explanation generation. These agents are orchestrated through an iterative workflow that alternates between evidence searching and task-aware evidence filtering and reasoning, facilitating strategic decision-making and systematic evidence analysis. Extensive experimental results demonstrate that the synergy between RW-Post and AgentFact substantially improves both the accuracy and interpretability of multimodal fact-checking.
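The iterative workflow described above, where planning, retrieval, and visual analysis feed a filter-and-reason step that loops until a verdict is reached, can be sketched as a minimal orchestration loop. Everything here is an illustrative assumption: the agent interfaces, the confidence heuristic, and the stopping threshold are stand-ins, not the paper's actual implementation.

```python
# Hypothetical sketch of an AgentFact-style multi-agent loop.
# All agent bodies are stubs; a real system would call LVLMs/search APIs.
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    image_caption: str  # stand-in for the post's visual content

@dataclass
class State:
    evidence: list = field(default_factory=list)
    verdict: str = "unverified"
    confidence: float = 0.0

def plan_strategy(claim):
    # Strategy-planning agent: decide which queries to issue (stub).
    return [f"search: {claim.text}", f"context of: {claim.image_caption}"]

def retrieve_evidence(query):
    # Evidence-retrieval agent: fetch candidate evidence for a query (stub).
    return [{"query": query, "snippet": f"result for '{query}'", "relevant": True}]

def analyze_visual(claim):
    # Visual-analysis agent: describe/verify the image (stub).
    return {"query": "visual", "snippet": f"image shows {claim.image_caption}",
            "relevant": True}

def filter_and_reason(state):
    # Reasoning agent: task-aware filtering, then update verdict/confidence.
    state.evidence = [e for e in state.evidence if e["relevant"]]
    state.confidence = min(1.0, 0.4 * len(state.evidence))  # toy heuristic
    state.verdict = "supported" if state.confidence >= 0.8 else "unverified"
    return state

def generate_explanation(state):
    # Explanation agent: verdict plus the evidence it rests on.
    cited = "; ".join(e["snippet"] for e in state.evidence)
    return (f"Verdict: {state.verdict} "
            f"(confidence {state.confidence:.1f}). Evidence: {cited}")

def fact_check(claim, max_rounds=3):
    state = State()
    queries = plan_strategy(claim)
    state.evidence.append(analyze_visual(claim))
    for _ in range(max_rounds):
        # Alternate evidence searching with filtering and reasoning.
        for q in queries:
            state.evidence.extend(retrieve_evidence(q))
        state = filter_and_reason(state)
        if state.confidence >= 0.8:  # assumed stopping criterion
            break
    return generate_explanation(state)
```

The design point the sketch illustrates is separation of concerns: each subtask lives in its own agent, so the retrieval or visual component can be swapped without touching the reasoning loop.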