Search papers, labs, and topics across Lattice.
1
0
3
LLM agents can be reliably jailbroken without modifying user prompts, revealing a critical vulnerability in their reasoning and memory mechanisms.