Forget token counting: this work introduces a semantic prior based on surprisal to compress LLM reasoning traces, achieving better accuracy and fluency than heuristic length penalties.
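The idea of surprisal-guided compression can be illustrated with a minimal toy sketch: keep the highest-surprisal tokens of a trace and drop the predictable ones. This sketch estimates surprisal from a unigram model fit on the trace itself purely for self-containment; a real system would use per-token log-probabilities from an LLM, and the function name `surprisal_compress` and the `keep_ratio` parameter are illustrative assumptions, not the paper's API.

```python
import math
from collections import Counter

def surprisal_compress(tokens, keep_ratio=0.5):
    """Keep the highest-surprisal tokens of a trace, in original order.

    Toy assumption: surprisal = -log p(token) under a unigram model
    estimated from the trace itself (stand-in for LM log-probs).
    """
    counts = Counter(tokens)
    total = len(tokens)
    surprisal = {t: -math.log(c / total) for t, c in counts.items()}
    # Rank token positions by surprisal, highest first (stable on ties).
    ranked = sorted(range(len(tokens)),
                    key=lambda i: surprisal[tokens[i]], reverse=True)
    k = max(1, int(len(tokens) * keep_ratio))
    keep = set(ranked[:k])
    # Preserve the original token order in the compressed trace.
    return [t for i, t in enumerate(tokens) if i in keep]

trace = "so the answer is is is clearly forty two because because math".split()
print(surprisal_compress(trace, keep_ratio=0.4))
```

Repeated filler like "is" and "because" carries low surprisal under the toy model and is dropped first, which is the intuition behind preferring a semantic prior over a flat length penalty.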