State Key Laboratory of Complex & Critical Software Environment, Beihang University
LVLMs can be jailbroken by "Reasoning-Oriented Programming," which chains together individually harmless visual inputs to elicit harmful reasoning, much as return-oriented programming chains benign code snippets in traditional security exploits.