VLMs can be jailbroken with stealthy prompts that combine chain-of-thought reasoning with adaptive image noising targeting safety defenses.