LLMs exhibit a surprising "conversation tax" in diagnostic reasoning, frequently abandoning correct initial diagnoses to align with incorrect user suggestions in multi-turn dialogues.
LLMs can cut inference costs by roughly 80% without sacrificing accuracy, simply by learning to recognize when their own reasoning is shaky and escalating those cases for a second opinion.
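The cost-saving idea above can be sketched as a confidence-gated cascade: a cheap model answers first, and an expensive model is consulted only when the cheap model's answer distribution looks uncertain. This is a minimal illustrative sketch, not the paper's actual method; the models are mocked, and the entropy threshold is an assumed tuning parameter.

```python
import math

ENTROPY_THRESHOLD = 1.0  # assumed cutoff in nats; tuned per task in practice


def entropy(probs):
    """Shannon entropy (in nats) of a probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)


def cheap_model(question):
    """Stand-in for a small LLM: returns (answer, answer-token probabilities)."""
    # Mocked behavior: confident on simple arithmetic, unsure otherwise.
    if "2+2" in question:
        return "4", [0.97, 0.02, 0.01]
    return "unsure-guess", [0.40, 0.35, 0.25]


def expensive_model(question):
    """Stand-in for a large LLM, consulted only on escalation."""
    return "careful-answer"


def answer(question):
    """Route a question: keep the cheap answer if it looks confident,
    otherwise escalate to the expensive model."""
    ans, probs = cheap_model(question)
    if entropy(probs) <= ENTROPY_THRESHOLD:
        return ans, "cheap"
    return expensive_model(question), "escalated"
```

Because only the uncertain minority of queries reaches the large model, the average cost per query stays close to the small model's cost while hard cases still get a careful answer.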