LLMs can become systematically over- or under-confident as they build on their own outputs in multi-turn conversations, and this "self-anchoring calibration drift" can even prevent models from becoming better calibrated.
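One way to make the claim above concrete is to measure the gap between a model's stated confidence and its actual accuracy at each conversation turn; a gap that widens turn over turn would be the drift described. This is a minimal illustrative sketch, not the paper's method: the function name, the toy data, and the `(confidence, correct)` record format are all assumptions introduced here.

```python
# Hypothetical sketch: per-turn calibration gap = mean stated confidence
# minus mean accuracy. A positive gap means overconfidence; a gap whose
# magnitude grows across turns is consistent with the "self-anchoring
# calibration drift" described above. Data format is an assumption.

def calibration_gap(turns):
    """turns: one list of (confidence, correct) pairs per conversation turn.
    Returns the per-turn difference between mean confidence and accuracy."""
    gaps = []
    for records in turns:
        mean_conf = sum(conf for conf, _ in records) / len(records)
        accuracy = sum(1 for _, ok in records if ok) / len(records)
        gaps.append(round(mean_conf - accuracy, 3))
    return gaps

# Toy data: stated confidence creeps upward while accuracy stays flat,
# so the gap widens turn over turn.
turns = [
    [(0.6, True), (0.6, False)],   # turn 1: conf 0.60, acc 0.50
    [(0.8, True), (0.8, False)],   # turn 2: conf 0.80, acc 0.50
    [(0.9, True), (0.9, False)],   # turn 3: conf 0.90, acc 0.50
]
print(calibration_gap(turns))  # -> [0.1, 0.3, 0.4]
```

A widening positive gap like this would indicate growing overconfidence; a shrinking or sign-flipping gap would indicate the under-confidence case mentioned above.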