LLMs are still wide open to jailbreaks, but this new method cuts attack success rates by nearly 5x by monitoring *intermediate* reasoning steps, not just the final output.