Search papers, labs, and topics across Lattice.
This paper introduces a novel adversarial prompt injection attack against multimodal large language models (MLLMs) that uses imperceptible visual prompts. The attack method adaptively embeds malicious instructions into an image via a bounded text overlay and optimizes imperceptible visual perturbations to align the image's feature representation with malicious visual and textual targets at both coarse- and fine-grained levels. Experiments across multiple closed-source MLLMs demonstrate the superior performance of this approach compared to existing prompt injection methods.
MLLMs are more vulnerable than we thought: imperceptible visual prompts can effectively hijack their behavior.
Although multimodal large language models (MLLMs) are increasingly deployed in real-world applications, their instruction-following behavior leaves them vulnerable to prompt injection attacks. Existing prompt injection methods predominantly rely on textual prompts or perceptible visual prompts that are observable by human users. In this work, we study imperceptible visual prompt injection against powerful closed-source MLLMs, where adversarial instructions are embedded in the visual modality. Our method adaptively embeds the malicious prompt into the input image via a bounded text overlay to provide semantic guidance. Meanwhile, the imperceptible visual perturbation is iteratively optimized to align the feature representation of the attacked image with those of the malicious visual and textual targets at both coarse- and fine-grained levels. Specifically, the visual target is instantiated as a text-rendered image and progressively refined during optimization to more faithfully represent the desired semantics and improve transferability. Extensive experiments on two multimodal understanding tasks across multiple closed-source MLLMs demonstrate the superior performance of our approach compared to existing methods.