The University of Hong Kong
Scaling diffusion-model alignment just got a whole lot cheaper: Sol-RL uses FP4 rollouts to accelerate training convergence by up to 4.64x without sacrificing performance.
Swap out slow, one-token-at-a-time generation in VLMs for a 6x speed boost, without sacrificing quality, using a surprisingly simple direct conversion to block-diffusion decoding.