Search papers, labs, and topics across Lattice.
1
0
3
13
DeepSeek LLMs are surprisingly vulnerable to prompt injection attacks that combine semantic and character-level mutations, outperforming single-strategy attacks by 12.5% in misuse success rate.