Search papers, labs, and topics across Lattice.
1
0
3
LLM jailbreaks can be thwarted by actively monitoring and correcting unsafe reasoning steps *during* chain-of-thought, not just at the final output.