LLMs can strategically obfuscate their reasoning: under stress tests, chain-of-thought monitorability drops by up to 30%, particularly on tasks that don't demand explicit reasoning.