The paper introduces Monarch-RT, a structured attention parameterization for video diffusion models using Monarch matrices, designed to address the computational bottleneck of 3D self-attention in real-time video generation. By factorizing attention with appropriately aligned block structure and a tiled Monarch parameterization, Monarch-RT achieves high expressivity and computational efficiency, outperforming existing sparse attention methods. The optimized implementation achieves up to 95% attention sparsity without quality loss and delivers significant kernel speedups (1.4-11.8X) compared to FlashAttention variants, enabling real-time video generation at 16 FPS on a single RTX 5090.
Real-time video generation just got a whole lot faster: Monarch-RT achieves up to 95% attention sparsity without quality loss and outperforms FlashAttention, finally enabling 16 FPS video generation on a single GPU.
Video generation with Diffusion Transformers is bottlenecked by the quadratic cost of 3D self-attention, especially in real-time regimes that are both few-step and autoregressive, where errors compound across time and each denoising step must carry substantially more information. In this setting, we find that prior sparse-attention approximations break down, despite their strong results for bidirectional, many-step diffusion. Specifically, we observe that video attention is not reliably sparse: it combines pronounced periodic structure driven by spatiotemporal position with dynamic, sparse semantic correspondences and dense mixing, exceeding the representational capacity of even oracle top-k attention. Building on this insight, we propose Monarch-RT, a structured attention parameterization for video diffusion models that factorizes attention using Monarch matrices. Through appropriately aligned block structure and our extended tiled Monarch parameterization, we achieve high expressivity while preserving computational efficiency, and we overcome the overhead of the parameterization through finetuning and custom Triton kernels. We first validate that Monarch-RT substantially outperforms existing sparse baselines designed only for bidirectional models. We then show that Monarch-RT attains up to 95% attention sparsity with no loss in quality when applied to the state-of-the-art model Self-Forcing, making Monarch-RT a pioneering highly capable sparse attention parameterization for real-time video generation. Our optimized implementation outperforms FlashAttention-2, FlashAttention-3, and FlashAttention-4 kernels on NVIDIA RTX 5090, H100, and B200 GPUs respectively, with kernel speedups in the range of 1.4-11.8x. This enables, for the first time, true real-time video generation with Self-Forcing at 16 FPS on a single RTX 5090.
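To make the structured-attention idea concrete, here is a minimal NumPy sketch of a plain (untiled) Monarch matrix in one standard form from the Monarch literature: M = Pᵀ L P R, where L and R are block-diagonal with b blocks of size b×b (so n = b²) and P is the perfect-shuffle (reshape-transpose) permutation. The function names and block layout below are illustrative assumptions, not the paper's actual Triton implementation; the point is only that the structured matvec costs O(n^1.5) blockwise matmuls rather than a dense n×n multiply.

```python
import numpy as np

def perfect_shuffle(n, b):
    # Index permutation p with (P y)[i] = y[p[i]]: reshape a length-n
    # vector to (b, n//b), transpose, and flatten.
    return np.arange(n).reshape(b, n // b).T.reshape(-1)

def monarch_dense(L_blocks, R_blocks):
    # Materialize M = P^T L P R as a dense n x n matrix (reference only).
    # L_blocks, R_blocks: (b, b, b) arrays holding b dense b x b blocks.
    b = L_blocks.shape[0]
    n = b * b
    L = np.zeros((n, n))
    R = np.zeros((n, n))
    for i in range(b):
        L[i*b:(i+1)*b, i*b:(i+1)*b] = L_blocks[i]
        R[i*b:(i+1)*b, i*b:(i+1)*b] = R_blocks[i]
    P = np.eye(n)[perfect_shuffle(n, b)]  # permutation matrix
    return P.T @ L @ P @ R

def monarch_matvec(L_blocks, R_blocks, x):
    # Structured multiply M x using only blockwise matmuls and shuffles:
    # b blocks of b x b work twice -> O(b^3) = O(n^1.5) instead of O(n^2).
    b = L_blocks.shape[0]
    p = perfect_shuffle(b * b, b)
    y = np.einsum('ijk,ik->ij', R_blocks, x.reshape(b, b)).reshape(-1)  # R x
    y = y[p]                                                            # P y
    y = np.einsum('ijk,ik->ij', L_blocks, y.reshape(b, b)).reshape(-1)  # L y
    inv = np.empty_like(p)
    inv[p] = np.arange(b * b)
    return y[inv]                                                       # P^T y

# Sanity check: structured and dense multiplies agree.
rng = np.random.default_rng(0)
b = 4
L_blocks = rng.standard_normal((b, b, b))
R_blocks = rng.standard_normal((b, b, b))
x = rng.standard_normal(b * b)
M = monarch_dense(L_blocks, R_blocks)
assert np.allclose(M @ x, monarch_matvec(L_blocks, R_blocks, x))
```

Replacing the dense attention map with such a factorization is what lets the block structure be aligned to the video's spatiotemporal layout while keeping the multiply cost subquadratic; the paper's tiled extension and kernel fusion build on this basic form.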