Ditch the slow lane: $R^2$-dLLM accelerates diffusion language models, cutting decoding steps by up to 75% without sacrificing output quality.
Discrete diffusion language models can now reach higher accuracy without retraining the backbone, thanks to a lightweight recurrent memory module that carries information across denoising steps.