Despite the buzz around Tiny Recursive Models, directly adapting their refinement mechanism into autoregressive architectures yields no reliable performance boost, suggesting the original TRM's success may stem from other factors.
LLMs implicitly encode uncertainty about numerical predictions in their embeddings, suggesting that expensive autoregressive sampling may be avoidable.
Imagine adapting to evolving user preferences post-deployment without expensive retraining: this paper offers a way to infer how neural network outputs change with hyperparameters, building surrogate models that adapt to new settings.
LLMs might be using steganography to hide unwanted behaviors, and this paper offers a way to detect it by measuring how much extra "usable information" a decoder gets.