The paper observes that attention heads in LLMs exhibit heterogeneous yet stable sparsity elasticities and exploits this by enforcing head-adaptive sparsity budgets, trading computation for inference quality on a per-head basis. Because heads with different sparsity levels take different amounts of time to compute, head-parallel deployment suffers cross-GPU resource bubbles; to minimize these bubbles, the paper introduces Sparsity-aware Head-Parallel Load Balance (S-HPLB), a novel attention deployment strategy that cuts average attention latency by 2.88x without degrading quality.
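The summary above does not spell out how head-adaptive budgets are applied at the kernel level. The sketch below is only an illustration of the general idea, letting each head attend to a different number of key positions; the function and parameter names (`head_adaptive_sparse_attention`, `head_budgets`) are hypothetical and not taken from the paper.

```python
# Minimal sketch of head-adaptive sparse attention (illustrative only;
# the paper's actual budget-selection rule is not given here).
import torch

def head_adaptive_sparse_attention(q, k, v, head_budgets):
    """q, k, v: [num_heads, seq_len, head_dim].
    head_budgets: per-head number of key positions each query may attend to."""
    num_heads, seq_len, head_dim = q.shape
    out = torch.empty_like(q)
    scale = head_dim ** -0.5
    for h, budget in enumerate(head_budgets):
        scores = (q[h] @ k[h].transpose(0, 1)) * scale        # [seq_len, seq_len]
        # Keep only the top-`budget` keys per query; mask out the rest.
        topk = torch.topk(scores, k=min(budget, seq_len), dim=-1)
        masked = torch.full_like(scores, float("-inf"))
        masked.scatter_(-1, topk.indices, topk.values)
        probs = torch.softmax(masked, dim=-1)
        out[h] = probs @ v[h]
    return out
```

With heterogeneous values in `head_budgets`, the per-head loops above do unequal amounts of work, which is exactly the imbalance S-HPLB is designed to absorb.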
Exploit the surprisingly stable, yet heterogeneous, sparsity patterns across attention heads to slash LLM attention latency by 2.88x without sacrificing quality.
With the growing scale of Large Language Models (LLMs) and ever-longer context lengths, attention computation has become a key performance bottleneck in LLM serving. To accelerate attention, recent practice parallelizes attention heads across multiple GPUs and widely adopts attention sparsification, which selectively computes only a subset of attention pairs under a preset sparsity budget. In this paper, we observe that the attention heads of an LLM often exhibit heterogeneous-yet-stable sparsity elasticities, which motivates us to enforce head-adaptive sparsity budgets to attain better efficiency while preserving high inference quality. From a systems perspective, however, heterogeneous sparsity levels make attention computation time inconsistent across heads, yielding cross-GPU resource bubbles under head-parallel deployment. To minimize such bubbles, we propose a novel attention deployment strategy called Sparsity-aware Head-Parallel Load Balance (S-HPLB). Experiments on long-context benchmarks show that S-HPLB achieves a $2.88\times$ improvement in average attention computation latency without quality degradation.
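The abstract does not describe the concrete S-HPLB assignment algorithm. The following sketch uses a standard greedy longest-processing-time heuristic, with per-head sparsity budgets as a proxy for attention cost, purely to illustrate how heads could be packed onto GPUs so that no GPU idles waiting on a straggler head; the names (`assign_heads_to_gpus`, `head_budgets`) are hypothetical.

```python
# Illustrative sketch of sparsity-aware head-to-GPU assignment, assuming
# a head's attention cost is roughly proportional to its sparsity budget.
import heapq

def assign_heads_to_gpus(head_budgets, num_gpus):
    """head_budgets: list of per-head sparsity budgets (cost proxy).
    Returns one list of head indices per GPU, balancing total cost."""
    # Min-heap of (current total cost, gpu id); heaviest heads are placed first.
    heap = [(0, g) for g in range(num_gpus)]
    heapq.heapify(heap)
    assignment = [[] for _ in range(num_gpus)]
    for head in sorted(range(len(head_budgets)),
                       key=lambda h: head_budgets[h], reverse=True):
        load, gpu = heapq.heappop(heap)
        assignment[gpu].append(head)
        heapq.heappush(heap, (load + head_budgets[head], gpu))
    return assignment

# Example: 8 heads with heterogeneous budgets spread across 2 GPUs.
print(assign_heads_to_gpus([512, 256, 1024, 128, 768, 256, 640, 384], 2))
```

Because the per-head budgets are observed to be stable, such an assignment could be computed once offline and reused across requests, keeping the per-GPU attention times close and the cross-GPU bubbles small.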