LLM safety degrades significantly in multi-turn conversations with adversarial agents exhibiting diverse personalities, revealing critical vulnerabilities missed by standard single-turn benchmarks.