360 AI Security Lab
Text-to-image models can be tricked into generating images that contain malicious text, with attack success rates above 90%, even in cases where standard jailbreak methods fail.