Southeast University, China
Chain-of-thought prompting makes large language models more capable, but it also makes them less safe; this paper tackles that problem by having models reason about safety *before* answering.