LLMs might use steganography to hide unwanted behaviors; this paper proposes detecting it by measuring how much extra "usable information" a decoder can extract.
LLM agents can plan more effectively in complex environments by distilling past experiences into "atomic facts" that augment their prompts, enabling lookahead search without fine-tuning.