SpargeAttention2 achieves 95% attention sparsity in video diffusion models with a 16.2x speedup, demonstrating that trainable sparse attention can substantially outperform training-free methods without sacrificing generation quality.
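A minimal sketch of the general idea, not SpargeAttention2's actual algorithm: a small trainable scorer ranks key blocks per query block, only the top few percent of blocks are kept (95% sparsity corresponds to keeping roughly 5%), and attention runs under the resulting block mask. Every name here (`scorer`, `block_size`, `keep_ratio`) is an illustrative assumption.

```python
# Hypothetical sketch of trainable block-sparse attention; not the paper's method.
import torch
import torch.nn.functional as F

def trainable_block_sparse_attention(q, k, v, scorer, block_size=64, keep_ratio=0.05):
    """q, k, v: (batch, seq, dim). `scorer` is a small trainable module
    (here a Linear) whose output is compared against key-block summaries."""
    B, S, D = q.shape
    nb = S // block_size
    q_blk = q.view(B, nb, block_size, D).mean(dim=2)   # cheap per-block summary
    k_blk = k.view(B, nb, block_size, D).mean(dim=2)
    # Trainable part: learned relevance of each key block to each query block.
    scores = torch.einsum("bid,bjd->bij", scorer(q_blk), k_blk)  # (B, nb, nb)
    topk = max(1, int(keep_ratio * nb))                # e.g. keep ~5% of blocks
    idx = scores.topk(topk, dim=-1).indices
    mask = torch.zeros_like(scores, dtype=torch.bool).scatter_(-1, idx, True)
    # Expand the block mask to token resolution, then run masked attention.
    tok_mask = mask.repeat_interleave(block_size, 1).repeat_interleave(block_size, 2)
    bias = torch.zeros_like(tok_mask, dtype=q.dtype).masked_fill_(~tok_mask, float("-inf"))
    return F.scaled_dot_product_attention(q, k, v, attn_mask=bias)

# Usage: keep_ratio=0.05 targets ~95% sparsity.
torch.manual_seed(0)
scorer = torch.nn.Linear(32, 32)
q, k, v = (torch.randn(1, 256, 32) for _ in range(3))
out = trainable_block_sparse_attention(q, k, v, scorer, block_size=32, keep_ratio=0.05)
```

In a real system the mask would drive a block-sparse kernel that skips the pruned blocks entirely; the dense masked attention above only illustrates the selection logic.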
Achieve significantly more stable and consistent video world models by encoding camera-ray geometry directly into the self-attention mechanism, outperforming screen-space positional embeddings.
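One plausible way to realize this, sketched under the assumption of a Plücker-ray parameterization (ray direction plus moment) injected into queries and keys in place of screen-space positional embeddings; `plucker_rays`, `RayConditionedSelfAttention`, and all shapes are hypothetical, not the paper's implementation.

```python
# Hedged sketch: camera-ray geometry as the attention's positional signal.
import torch
import torch.nn.functional as F

def plucker_rays(K_inv, cam_to_world, H, W):
    """Per-pixel Plücker coordinates (direction, origin x direction), shape (H*W, 6)."""
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32), indexing="ij")
    pix = torch.stack([xs + 0.5, ys + 0.5, torch.ones_like(xs)], dim=-1)  # (H, W, 3)
    dirs_cam = pix.reshape(-1, 3) @ K_inv.T                # camera-space ray directions
    R, t = cam_to_world[:3, :3], cam_to_world[:3, 3]
    dirs = F.normalize(dirs_cam @ R.T, dim=-1)             # world-space directions
    moment = torch.cross(t.expand_as(dirs), dirs, dim=-1)  # o x d
    return torch.cat([dirs, moment], dim=-1)               # (H*W, 6)

class RayConditionedSelfAttention(torch.nn.Module):
    def __init__(self, dim, n_heads=8):
        super().__init__()
        self.attn = torch.nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.ray_proj = torch.nn.Linear(6, dim)  # maps Plücker rays into token space

    def forward(self, x, rays):
        # x: (B, H*W, dim) patch tokens; rays: (H*W, 6) Plücker embeddings.
        ray_pe = self.ray_proj(rays)                       # geometry-aware positional code
        qk = x + ray_pe                                    # inject rays into queries and keys
        out, _ = self.attn(qk, qk, x, need_weights=False)  # values stay geometry-free
        return out

# Usage on a 16x16 patch grid (identity camera as a stand-in):
rays = plucker_rays(torch.eye(3), torch.eye(4), H=16, W=16)
attn = RayConditionedSelfAttention(dim=64)
y = attn(torch.randn(2, 256, 64), rays)
```

Because Plücker coordinates identify a ray in world space, two patches that see the same scene point from different frames get related codes, which is what a screen-space positional embedding cannot provide.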