Achieve up to 52.5% compression of LLM chain-of-thought reasoning *while improving* accuracy, by dynamically calibrating CoT length to each problem.