360 AI Security Lab
Large vision-language models (LVLMs) can be jailbroken via "Reasoning-Oriented Programming," which chains together individually harmless visual inputs to trigger harmful reasoning, analogous to return-oriented programming (ROP) in traditional software exploitation.