Shenzhen University
LLMs can develop more consistent world models by predicting multiple tokens *and* anchoring those predictions to ground-truth hidden state trajectories, mitigating structural hallucinations.
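The combined objective described above can be sketched as a two-term loss: cross-entropy over a k-step multi-token prediction horizon, plus an anchoring penalty tying predicted hidden states to a ground-truth trajectory. This is a minimal illustrative sketch, not the paper's actual method; the function name, shapes, and the MSE anchoring term are assumptions.

```python
import numpy as np

def multi_token_anchored_loss(logits, target_tokens, pred_hidden, true_hidden, alpha=0.5):
    """Hypothetical combined objective (illustrative, not the paper's code).

    logits:        (k, vocab) scores for the next k tokens
    target_tokens: (k,) ground-truth token ids over the horizon
    pred_hidden:   (k, d) model's predicted hidden-state trajectory
    true_hidden:   (k, d) ground-truth hidden-state trajectory
    alpha:         weight on the anchoring term (assumed hyperparameter)
    """
    # Multi-token prediction: mean cross-entropy over the k-step horizon.
    shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(target_tokens)), target_tokens].mean()
    # Anchoring: penalize drift from the ground-truth hidden-state trajectory,
    # which is what would discourage structurally inconsistent ("hallucinated") states.
    anchor = ((pred_hidden - true_hidden) ** 2).mean()
    return ce + alpha * anchor
```

With matched hidden states the anchor term vanishes and the loss reduces to plain multi-token cross-entropy; any drift from the trajectory strictly increases it.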