LLM safety filters can be bypassed by strategically fragmenting and camouflaging malicious intent across multiple turns, achieving a 26% improvement in jailbreak success rate on GPT-5-mini.