The paper introduces Reasoning-Oriented Programming (ROP), a novel attack paradigm against Large Vision-Language Models (LVLMs) that exploits vulnerabilities in compositional reasoning by chaining benign premises to induce harmful logic. ROP leverages an automated framework called \tool{} to generate semantically orthogonal and spatially isolated visual gadgets, forcing malicious logic to emerge only during late-stage reasoning and thus bypassing perception-level safety alignment. Experiments on SafeBench and MM-SafetyBench across seven state-of-the-art LVLMs, including GPT-4o and Claude 3.7 Sonnet, demonstrate that \tool{} consistently outperforms existing baselines in circumventing safety alignment.
LVLMs can be jailbroken by "Reasoning-Oriented Programming," which chains together harmless visual inputs to trigger harmful reasoning, much like return-oriented programming in traditional security exploits.
Large Vision-Language Models (LVLMs) undergo safety alignment to suppress harmful content. However, current defenses predominantly target explicit malicious patterns in the input representation, often overlooking the vulnerabilities inherent in compositional reasoning. In this paper, we identify a systemic flaw where LVLMs can be induced to synthesize harmful logic from benign premises. We formalize this attack paradigm as \textit{Reasoning-Oriented Programming}, drawing a structural analogy to Return-Oriented Programming in systems security. Just as ROP circumvents memory protections by chaining benign instruction sequences, our approach exploits the model's instruction-following capability to orchestrate a semantic collision of orthogonal benign inputs. We instantiate this paradigm via \tool{}, an automated framework that optimizes for \textit{semantic orthogonality} and \textit{spatial isolation}. By generating visual gadgets that are semantically decoupled from the harmful intent and arranging them to prevent premature feature fusion, \tool{} forces the malicious logic to emerge only during the late-stage reasoning process. This effectively bypasses perception-level alignment. We evaluate \tool{} on SafeBench and MM-SafetyBench across seven state-of-the-art LVLMs, including GPT-4o and Claude 3.7 Sonnet. Our results demonstrate that \tool{} consistently circumvents safety alignment, outperforming the strongest existing baseline by an average of 4.67\% on open-source models and 9.50\% on commercial models.
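The abstract does not specify how \tool{} scores \textit{semantic orthogonality}, so as a minimal, purely illustrative sketch, a generic proxy is the worst-case pairwise cosine similarity over a set of gadget embeddings: a small maximum suggests the gadgets are mutually near-orthogonal in embedding space. The function names and the toy vectors below are hypothetical, not from the paper, and no attack logic is shown.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def max_pairwise_similarity(embeddings):
    """Largest pairwise similarity across a gadget set.

    A value near 0 means the gadgets are mutually near-orthogonal;
    a value near 1 flags a redundant (semantically overlapping) pair.
    """
    return max(
        cosine_similarity(embeddings[i], embeddings[j])
        for i in range(len(embeddings))
        for j in range(i + 1, len(embeddings))
    )

# Toy 3-D "embeddings": the first two are nearly orthogonal,
# while the third largely duplicates the first.
gadgets = [[1.0, 0.0, 0.1], [0.0, 1.0, 0.1], [0.9, 0.1, 0.0]]
print(max_pairwise_similarity(gadgets))  # dominated by the redundant pair
```

In an actual pipeline one would use real multimodal embeddings rather than toy vectors; the point is only that orthogonality is a measurable geometric property that an automated framework can optimize against.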