ByteDance, CA, USA
Multimodal LLMs can be hijacked by adversarial instructions hidden inside seemingly innocuous images, with such attacks achieving a 64% success rate in manipulating model outputs.