Hybrid-thinking LLMs can be substantially improved simply by separating the feed-forward pathways for the reasoning and non-reasoning modes, reducing cross-mode leakage and improving accuracy.
LLMs may ace the test itself, but their uncertainty estimates are far from perfect, raising serious concerns about their reliability in high-stakes educational assessment.