The paper introduces Event Tensor, a compiler abstraction for dynamic megakernels that addresses kernel launch overheads and coarse synchronization limitations in GPU workloads like LLM inference. Event Tensor encodes dependencies between tiled tasks, enabling support for shape and data-dependent dynamism within fused kernels. The Event Tensor Compiler (ETC) leverages this abstraction to apply static and dynamic scheduling, achieving state-of-the-art LLM serving latency and reduced warmup overhead.
Unlock 2x faster LLM serving and slash warmup times by fusing kernels that gracefully handle dynamic shapes and data dependencies.
Modern GPU workloads, especially large language model (LLM) inference, suffer from kernel launch overheads and coarse synchronization that limit inter-kernel parallelism. Recent megakernel techniques fuse multiple operators into a single persistent kernel to eliminate launch gaps and expose inter-kernel parallelism, but they struggle to handle dynamic shapes and data-dependent computation in real workloads. We present Event Tensor, a unified compiler abstraction for dynamic megakernels. Event Tensor encodes dependencies between tiled tasks and enables first-class support for both shape and data-dependent dynamism. Built atop this abstraction, our Event Tensor Compiler (ETC) applies static and dynamic scheduling transformations to generate high-performance persistent kernels. Evaluations show that ETC achieves state-of-the-art LLM serving latency while significantly reducing system warmup overhead.
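To make the abstraction concrete, here is a minimal sketch (hypothetical, not the paper's actual API) of the core idea: each tiled task carries an event counter of unresolved dependencies, finishing a task signals its dependents' events, and a tile becomes schedulable once its counter reaches zero. A persistent kernel's dynamic scheduler can run a loop like this entirely on-device, avoiding per-operator kernel launches.

```python
from collections import deque

class TileTask:
    """One tile of work; `pending` is its event counter of unresolved deps."""
    def __init__(self, name, deps=()):
        self.name = name
        self.pending = len(deps)   # how many producer tiles must finish first
        self.dependents = []       # tiles waiting on this one
        for d in deps:
            d.dependents.append(self)

def run(tasks):
    """Dynamic scheduling loop: pop ready tiles, 'execute', signal dependents."""
    ready = deque(t for t in tasks if t.pending == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t.name)       # stand-in for executing the tile
        for d in t.dependents:
            d.pending -= 1         # signal the dependent's event
            if d.pending == 0:
                ready.append(d)    # all deps resolved: tile is schedulable
    return order

# Two producer tiles feed one consumer tile (e.g. two matmul tiles -> a reduce).
a, b = TileTask("A"), TileTask("B")
c = TileTask("C", deps=[a, b])
print(run([a, b, c]))  # -> ['A', 'B', 'C']
```

Because readiness is decided by counters at run time rather than by a fixed host-side launch order, the same loop handles shapes or dependency patterns that are only known during execution, which is the dynamism the abstract refers to.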