FlashPrefill accelerates the prefilling stage of long-context LLMs by employing a fast block-searching technique to identify dynamic sparse attention patterns (vertical, slash, and block). It introduces a dynamic thresholding mechanism that avoids sorting attention scores, effectively pruning the long-tail distribution and improving sparsity. Experiments show FlashPrefill achieves a 27.78x speedup on 256K sequences and maintains a 1.71x speedup even at a 4K context length, demonstrating its efficiency across different sequence lengths.
Forget slow attention: FlashPrefill achieves a staggering 27.78x speedup in long-context prefilling by instantly discovering and thresholding sparse attention patterns.
Long-context modeling is a pivotal capability for Large Language Models, yet the quadratic complexity of attention remains a critical bottleneck, particularly during the compute-intensive prefilling phase. While various sparse attention mechanisms have been explored, they typically suffer from either significant search latency or insufficient sparsity. In this paper, we propose FlashPrefill, a framework enabling ultra-fast prefilling via instantaneous pattern discovery and thresholding. FlashPrefill leverages a fast block-searching technique to simultaneously locate dynamic vertical, slash, and block-sparse attention patterns. Crucially, it introduces a dynamic thresholding mechanism that bypasses the prohibitive overhead of sorting or accumulating attention scores while effectively eliminating the long-tail distribution to enhance sparsity. Extensive evaluations demonstrate that FlashPrefill achieves a substantial leap in efficiency, delivering an unprecedented 27.78x speedup on 256K sequences. Notably, unlike existing methods that incur efficiency degradation on shorter contexts, FlashPrefill maintains a 1.71x speedup even at a 4K context length, demonstrating its robustness and practical utility across varying sequence scales.
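The core idea of the dynamic thresholding mechanism, as described above, is to prune low-scoring attention blocks without ever sorting or accumulating scores. The sketch below is a hypothetical illustration of one way such a sort-free criterion could work: block-level scores are estimated from mean-pooled queries and keys, and a block is kept only if its softmax weight would be at least a fraction `tau` of the row maximum (i.e. `score >= row_max + log(tau)`). The function name, pooling strategy, and `tau` parameter are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

def select_blocks(q, k, block=64, tau=0.05):
    """Hypothetical sketch of sort-free block pruning.

    q, k : (n, d) query/key matrices; n must be a multiple of `block`.
    Returns a boolean (nb, nb) mask of key blocks kept per query block.
    """
    n, d = q.shape
    nb = n // block
    # Mean-pool each block to get one representative vector per block.
    qb = q[: nb * block].reshape(nb, block, d).mean(axis=1)
    kb = k[: nb * block].reshape(nb, block, d).mean(axis=1)
    scores = qb @ kb.T / np.sqrt(d)  # (nb, nb) block-level scores
    # Causal mask: a query block only attends to itself and earlier blocks.
    causal = np.tril(np.ones((nb, nb), dtype=bool))
    scores = np.where(causal, scores, -np.inf)
    # Dynamic threshold: keep blocks whose softmax weight would be at
    # least tau * max weight in the row -- no sorting or top-k needed,
    # and the long tail of small scores is dropped in one comparison.
    row_max = scores.max(axis=1, keepdims=True)
    return scores >= row_max + np.log(tau)

# Usage: prune blocks for random 256-token queries/keys (4x4 block grid).
rng = np.random.default_rng(0)
q = rng.standard_normal((256, 64))
k = rng.standard_normal((256, 64))
keep = select_blocks(q, k)
print(keep.shape, keep.sum())
```

Because the threshold is a single per-row comparison against the running maximum, it fits naturally into a streaming kernel, whereas top-k selection would require materializing and sorting all block scores first.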