Language models are increasingly doing their real work in the "invisible" latent space, not the tokens we see.
Current research-agent benchmarks miss critical flaws: MiroEval shows that process quality reliably predicts research outcomes, and that multimodal tasks expose weaknesses invisible to output-level metrics.
Autoregressive inference can get up to a 14x speedup without retraining, via a simple trick: reusing attention weights within semantically coherent chunks.