FlashSampling fuses categorical sampling into the LM-head matmul, avoiding materialization of the logits tensor in HBM by computing logits tile-by-tile on-chip, adding Gumbel noise, and keeping only the maximizer per row and vocabulary tile. The fused tiled kernel remains exact because $\argmax$ decomposes over partitions of the vocabulary, and grouped variants stay exact via hierarchical factorization. Experiments across NVIDIA GPUs (H100, H200, B200, B300) demonstrate speedups in kernel-level decode workloads and up to 19% reduction in time per output token in end-to-end vLLM experiments.
Exact sampling in large-vocabulary decoding can be sped up by up to 19% simply by fusing it into the LM-head matmul, turning a bandwidth bottleneck into a lightweight epilogue.
Sampling from a categorical distribution is mathematically simple, but in large-vocabulary decoding, it often triggers extra memory traffic and extra kernels after the LM head. We present FlashSampling, an exact sampling primitive that fuses sampling into the LM-head matmul and never materializes the logits tensor in HBM. The method is simple: compute logits tile-by-tile on chip, add Gumbel noise, keep only one maximizer per row and per vocabulary tile, and finish with a small reduction over tiles. The fused tiled kernel is exact because $\argmax$ decomposes over a partition; grouped variants for online and tensor-parallel settings are exact by hierarchical factorization of the categorical distribution. Across H100, H200, B200, and B300 GPUs, FlashSampling speeds up kernel-level decode workloads, and in end-to-end vLLM experiments, it reduces time per output token by up to 19% on the models we test. These results show that exact sampling, with no approximation, can be integrated into the matmul itself, turning a bandwidth-bound postprocessing step into a lightweight epilogue. Project Page: https://github.com/FlashSampling/FlashSampling.
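The exactness argument above can be illustrated with a minimal NumPy sketch of the tiled Gumbel-max idea. This is not the FlashSampling CUDA kernel; the sizes and variable names are illustrative, and the "tile" here is an ordinary array slice standing in for an on-chip tile of the LM-head matmul.

```python
import numpy as np

rng = np.random.default_rng(0)
d, V, tile = 64, 1000, 128          # hidden dim, vocab size, vocab tile width (illustrative)

h = rng.standard_normal(d)          # decode-time hidden state (one row)
W = rng.standard_normal((V, d))     # LM-head weight
g = rng.gumbel(size=V)              # one Gumbel noise value per vocab entry

# Reference: materialize all logits, then Gumbel-max sample.
# argmax(logits + Gumbel noise) is an exact categorical sample.
ref = int(np.argmax(W @ h + g))

# Tiled: never form the full logit vector; keep only a running
# (best score, best index) pair per row while sweeping vocab tiles.
best_val, best_idx = -np.inf, -1
for start in range(0, V, tile):
    end = min(start + tile, V)
    scores = W[start:end] @ h + g[start:end]   # computed on chip in the real kernel
    j = int(np.argmax(scores))
    if scores[j] > best_val:
        best_val, best_idx = scores[j], start + j

# argmax decomposes over the vocabulary partition, so the tiled
# sweep recovers the same sample as the full-logits reference.
assert best_idx == ref
```

The final reduction over per-tile maximizers is exactly the "small reduction over tiles" described in the abstract: each tile contributes one candidate, and the global maximizer among candidates equals the maximizer over the full vocabulary.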