The paper introduces M3D-Net, a dual-stream deepfake detection network that reconstructs 3D facial geometry and reflectance properties from single-view RGB images via a self-supervised module. A 3D Feature Pre-fusion Module (PFM) adaptively adjusts the multi-scale reconstructed features, and a Multi-modal Fusion Module (MFM) integrates RGB and 3D-reconstructed features through attention. Experiments on multiple public datasets show that M3D-Net achieves state-of-the-art detection accuracy and robustness, outperforming existing methods.
Reconstructing 3D facial geometry from 2D images substantially improves deepfake detection, yielding state-of-the-art accuracy and robustness.
With the rapid advancement of deep learning in image generation, facial forgery techniques have achieved unprecedented realism, posing serious threats to cybersecurity and information authenticity. Most existing deepfake detection approaches rely on the reconstruction of isolated facial attributes without fully exploiting the complementary nature of multi-modal feature representations. To address these challenges, this paper proposes a novel Multi-Modal 3D Facial Feature Reconstruction Network (M3D-Net) for deepfake detection. Our method leverages an end-to-end dual-stream architecture that reconstructs fine-grained facial geometry and reflectance properties from single-view RGB images via a self-supervised 3D facial reconstruction module. The network further enhances detection performance through a 3D Feature Pre-fusion Module (PFM), which adaptively adjusts multi-scale features, and a Multi-modal Fusion Module (MFM) that effectively integrates RGB and 3D-reconstructed features using attention mechanisms. Extensive experiments on multiple public datasets demonstrate that our approach achieves state-of-the-art performance in terms of detection accuracy and robustness, significantly outperforming existing methods while exhibiting strong generalization across diverse scenarios.
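The paper page does not include an implementation, but the abstract's pipeline maps naturally onto a dual-stream design. Below is a minimal PyTorch sketch of that structure for orientation only: the encoder choices, feature dimension, gating in the PFM stand-in, and the cross-attention fusion in the MFM stand-in are all illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch of the dual-stream fusion described in the abstract.
# All module internals, names, and dimensions below are assumptions;
# the paper does not specify the PFM/MFM implementations.
import torch
import torch.nn as nn

class PreFusionModule(nn.Module):
    """Stand-in for the 3D Feature Pre-fusion Module (PFM): rescales
    3D-reconstructed token features with a learned channel-wise gate."""
    def __init__(self, dim=256):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, feats_3d):  # (B, N, dim) token features
        return feats_3d * self.gate(feats_3d.mean(dim=1, keepdim=True))

class MultiModalFusion(nn.Module):
    """Stand-in for the Multi-modal Fusion Module (MFM): cross-attention
    from RGB tokens to 3D-reconstructed tokens, with a residual connection."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, rgb, recon):  # both (B, N, dim)
        fused, _ = self.attn(query=rgb, key=recon, value=recon)
        return self.norm(rgb + fused)

class M3DNetSketch(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        # Placeholder patch encoders; the paper instead obtains the second
        # stream from a self-supervised 3D facial reconstruction module.
        self.rgb_encoder = nn.Conv2d(3, dim, kernel_size=16, stride=16)
        self.recon_encoder = nn.Conv2d(3, dim, kernel_size=16, stride=16)
        self.pfm = PreFusionModule(dim)
        self.mfm = MultiModalFusion(dim)
        self.head = nn.Linear(dim, 2)  # real / fake logits

    def forward(self, image, recon_maps):
        to_tokens = lambda x: x.flatten(2).transpose(1, 2)  # (B, N, dim)
        rgb = to_tokens(self.rgb_encoder(image))
        r3d = self.pfm(to_tokens(self.recon_encoder(recon_maps)))
        return self.head(self.mfm(rgb, r3d).mean(dim=1))

# Smoke test with random inputs standing in for an RGB frame and its
# rendered geometry/reflectance maps.
logits = M3DNetSketch()(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
```

The residual cross-attention keeps the RGB stream dominant while letting reconstructed geometry cues modulate it, one plausible reading of "integrates RGB and 3D-reconstructed features using attention mechanisms."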