HybridStitch accelerates text-to-image diffusion models by selectively applying a smaller, faster model to easier-to-render image regions while reserving a larger, more capable model for complex areas. This approach treats image generation as an editing process, using the small model for a coarse sketch and the large model for refinement. Experiments on Stable Diffusion 3 demonstrate a 1.83x speedup compared to other mixture-of-model techniques, without sacrificing generation quality.
Achieve nearly 2x faster text-to-image diffusion by intelligently stitching together large and small models at both the pixel and timestep levels.
Diffusion models have demonstrated remarkable capability in Text-to-Image (T2I) generation. Despite their high-quality outputs, they incur heavy computational overhead, especially for large models containing tens of billions of parameters. Prior work has shown that replacing part of the denoising steps with a smaller model can maintain generation quality. However, these methods only save computation across timesteps, ignoring spatial differences in compute demand within a single timestep. In this work, we propose HybridStitch, a new T2I generation paradigm that treats generation as editing. Specifically, we introduce a hybrid stage that jointly incorporates both the large model and the small model. HybridStitch separates the image into two regions: one that is relatively easy to render, enabling an early transition to the smaller model, and another that is more complex and therefore requires refinement by the large model. HybridStitch employs the small model to construct a coarse sketch while exploiting the large model to edit and refine the complex regions. In our evaluation, HybridStitch achieves a 1.83$\times$ speedup on Stable Diffusion 3, outperforming all existing mixture-of-model methods.
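The hybrid stage described above can be sketched at a high level as a two-phase denoising loop: the small model drafts a coarse sketch, a per-pixel mask splits the image into easy and complex regions, and the remaining steps blend the two models' outputs region-wise. The sketch below is illustrative only, not the paper's implementation; the denoisers are stand-in callables, and the gradient-magnitude complexity heuristic is an assumption (the paper's actual region-selection criterion is not specified here).

```python
import numpy as np

def hybrid_denoise(latent, small_model, large_model, num_steps=10,
                   switch_step=4, complexity_thresh=0.5):
    """Illustrative region-wise hybrid denoising sketch (not HybridStitch's code).

    Stage 1: the small model alone produces a coarse sketch for the first
             `switch_step` timesteps.
    Stage 2: a per-pixel mask routes complex regions to the large model
             for refinement while the small model finishes easy regions.
    """
    x = latent
    # Stage 1: coarse sketch with the small model only.
    for _ in range(switch_step):
        x = small_model(x)

    # Heuristic complexity mask: local gradient magnitude as a stand-in
    # for the paper's actual region-selection criterion (assumption).
    gy, gx = np.gradient(x)
    mask = np.hypot(gx, gy) > complexity_thresh  # True = complex region

    # Hybrid stage: large model edits complex regions, small model the rest.
    for _ in range(num_steps - switch_step):
        x = np.where(mask, large_model(x), small_model(x))
    return x, mask

# Usage with toy denoisers (assumptions; real models are diffusion U-Nets/DiTs):
small = lambda x: 0.9 * x
large = lambda x: 0.8 * x
latent = np.random.default_rng(0).normal(size=(8, 8))
out, mask = hybrid_denoise(latent, small, large)
```

In a real pipeline the savings come from the hybrid stage: the large model only needs to process (or its output is only retained for) the masked complex regions, while the cheap small model covers the rest.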