Vision-language models (VLMs) are surprisingly susceptible to jailbreak attacks that use unmodified, natural images; a novel memory-augmented multi-agent framework achieves attack success rates of up to 90%.