Chinese Academy of Sciences
LLMs can be backdoored to "think well but answer wrong": they generate seemingly correct reasoning traces while producing a wrong final answer, making such attacks far harder to detect.