LRMs can often recover from injected errors in their reasoning steps, revealing a hidden "critique" ability that can be harnessed to improve performance without additional training.