LLM safety failures aren't always about the prompt: for a fixed prompt, repeatedly sampling diverse model outputs can drive jailbreak success rates close to 100%.
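A minimal sketch of why repeated sampling compounds so quickly, under the simplifying assumption that each sampled completion independently "succeeds" with some small probability p. The names `model_generate` and `is_jailbroken` are hypothetical placeholders, not a specific model or classifier API.

```python
import random

# Illustration only: if a single sampled completion jailbreaks with
# probability p, then n independent samples of the same prompt succeed
# with probability 1 - (1 - p) ** n.
def attack_success_rate(p_single: float, n_samples: int) -> float:
    return 1.0 - (1.0 - p_single) ** n_samples

# Even a 1% per-sample success rate compounds quickly under resampling.
for n in (1, 10, 100, 1000):
    print(f"n={n:4d}  success ~ {attack_success_rate(0.01, n):.3f}")

# Hypothetical best-of-n sampling loop (model_generate and is_jailbroken
# are assumed stand-ins): resample the same prompt at temperature > 0
# until a harmful completion slips through or the budget runs out.
def best_of_n_attack(prompt, model_generate, is_jailbroken, n=100):
    for _ in range(n):
        completion = model_generate(prompt, temperature=1.0)
        if is_jailbroken(completion):
            return completion
    return None
```

With p = 0.01, the closed-form estimate already exceeds 0.63 at n = 100 and 0.99995 at n = 1000, which is the intuition behind the "close to 100%" figure; real attack curves depend on how correlated the samples are.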