This paper investigates the use of multimodal models (PaliGemma, LLaVA, Qwen) for Presentation Attack Detection (PAD) on ID documents by fusing visual features with textual metadata extracted from the documents. The motivation is to improve on unimodal visual PAD systems, which are vulnerable to sophisticated attacks. Surprisingly, the experiments reveal that these multimodal models fail to achieve satisfactory performance in detecting presentation attacks on ID documents.
Multimodal models surprisingly falter when applied to presentation attack detection on ID documents, challenging the assumption that combining visual and textual data inherently improves security.
The integration of multimodal models into Presentation Attack Detection (PAD) for ID documents is a promising direction in biometric security. Traditional PAD systems rely solely on visual features, which often fail against sophisticated spoofing attacks. This study explores combining visual and textual modalities by using pre-trained multimodal models, such as PaliGemma, LLaVA, and Qwen, to detect presentation attacks on ID documents. The approach merges deep visual embeddings with contextual metadata (e.g., document type, issuer, and date), as in the sketch below. However, experimental results indicate that these models struggle to reliably detect presentation attacks on ID documents.
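To make the setup concrete, here is a minimal sketch (not the paper's exact pipeline) of prompting a pre-trained multimodal model with both the document image and its extracted metadata. It assumes the Hugging Face `llava-hf/llava-1.5-7b-hf` checkpoint; the metadata values, the image path, and the binary-verdict prompt are illustrative placeholders.

```python
# Hedged sketch: visual + textual PAD query against a pre-trained
# vision-language model. Checkpoint, metadata, and prompt wording are
# assumptions, not the paper's exact configuration.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Textual metadata extracted from the document (hypothetical values).
metadata = {"document_type": "passport", "issuer": "UTO", "date": "2031-05-14"}

# Fuse modalities by injecting the metadata into the text prompt
# alongside the image token.
prompt = (
    "USER: <image>\n"
    f"This ID document has metadata: {metadata}. "
    "Is the image a bona fide capture or a presentation attack "
    "(e.g., print, screen replay, composite)? "
    "Answer with one word: 'bona-fide' or 'attack'.\nASSISTANT:"
)

image = Image.open("id_sample.png")  # hypothetical sample path
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=10)
print(processor.decode(output[0], skip_special_tokens=True))
```

The same pattern applies to the other models studied; only the checkpoint and prompt template change. The paper's negative result suggests that, without task-specific fine-tuning, such zero-shot prompting does not yield reliable PAD decisions.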