Search papers, labs, and topics across Lattice.
Explicitly training LLMs to verbalize confidence scores and signal reasoning-time uncertainty unlocks better calibration, failure detection, and control in retrieval-augmented generation.
LLMs get worse at both personalization and privacy as context length increases, revealing a fundamental scaling gap in current architectures.
Achieve safe and efficient real-world robot control by continually adapting policies trained in simulation, overcoming the limitations of fixed policies and wide randomization ranges.