Cutting an LLM's reasoning token budget can backfire spectacularly, dragging performance below that of models that do *no* reasoning at all.