University of Nottingham Ningbo China
Text-to-image models can be tricked into generating images containing malicious text with over 90% success, even when standard jailbreak methods fail.