Why waste tokens on reasoning when you don't need to? Selective Chain-of-Thought, which invokes step-by-step reasoning only for queries that need it, cuts LLM inference costs by up to 47% in medical QA with minimal accuracy loss.
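The core idea can be sketched as a router that decides, per query, whether to spend tokens on a chain-of-thought prompt or answer directly. This is a minimal illustrative sketch, not the paper's method: the names (`estimate_difficulty`, `build_prompt`), the word-count heuristic, and the 0.5 threshold are all assumptions for demonstration; a real router might use a small classifier or the model's own confidence score.

```python
def estimate_difficulty(question: str) -> float:
    """Toy difficulty heuristic (illustrative only): longer,
    multi-clause questions are treated as harder. A real system
    would use a learned router or model confidence."""
    score = len(question.split()) / 40.0
    score += 0.3 * question.count(",")
    return min(score, 1.0)

def build_prompt(question: str, threshold: float = 0.5) -> str:
    """Prepend a chain-of-thought instruction only when the
    question looks hard, saving reasoning tokens on easy ones."""
    if estimate_difficulty(question) >= threshold:
        return f"{question}\nLet's think step by step."
    return f"{question}\nAnswer concisely."

easy = "What is the normal resting heart rate for adults?"
hard = ("A 62-year-old with type 2 diabetes, stage 3 CKD, and new "
        "atrial fibrillation asks which anticoagulant is safest, "
        "given reduced renal clearance, drug interactions, and cost.")

print(build_prompt(easy))  # direct-answer prompt, no CoT tokens
print(build_prompt(hard))  # CoT prompt for the complex case
```

The savings come from the easy-query path: every query that skips the reasoning preamble also skips the long generated rationale, which is where most of the token cost lives.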