Stop wasting tokens: a novel RL framework adaptively controls reasoning length, cutting LRM token generation by 40% without sacrificing accuracy.