Search papers, labs, and topics across Lattice.
Forget slow, multi-step diffusion: this work achieves state-of-the-art text generation quality with a *single* denoising step using flow-based language models.
By surgically regularizing only the weight components corresponding to the largest singular values, $S^2D$ enables quantization-friendly conditioning of neural activations, improving post-training quantization (PTQ) accuracy by up to 7% on ImageNet.