Diffusion Templates introduces a unified plugin framework to decouple base diffusion model inference from controllable capability injection, addressing the fragmentation of controllable diffusion methods. The framework uses Template models to map task-specific inputs to a capability representation, a Template cache for standardized capability injection, and a Template pipeline to merge caches into the base diffusion runtime. Experiments across diverse tasks like image editing and aesthetic alignment demonstrate the framework's ability to unify controllable generation while preserving modularity and composability.
Stop reinventing the wheel for every new controllable diffusion task: Diffusion Templates offers a plug-and-play framework for injecting capabilities like editing, style control, and inpainting into any diffusion backbone.
Controllable diffusion methods have substantially expanded the practical utility of diffusion models, but they are typically developed as isolated, backbone-specific systems with incompatible training pipelines, parameter formats, and runtime hooks. This fragmentation makes it difficult to reuse infrastructure across tasks, transfer capabilities across backbones, or compose multiple controls within a single generation pipeline. We present Diffusion Templates, a unified and open plugin framework that decouples base-model inference from controllable capability injection. The framework is organized around three components: Template models that map arbitrary task-specific inputs to an intermediate capability representation, a Template cache that functions as a standardized interface for capability injection, and a Template pipeline that loads, merges, and injects one or more Template caches into the base diffusion runtime. Because the interface is defined at the systems level rather than tied to a specific control architecture, heterogeneous capability carriers such as KV-Cache and LoRA can be supported under the same abstraction. Based on this design, we build a diverse model zoo spanning structural control, brightness adjustment, color adjustment, image editing, super-resolution, sharpness enhancement, aesthetic alignment, content reference, local inpainting, and age control. These case studies show that Diffusion Templates can unify a broad range of controllable generation tasks while preserving modularity, composability, and practical extensibility across rapidly evolving diffusion backbones. All resources will be open sourced, including code, models, and datasets.
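To make the abstraction concrete, the sketch below illustrates how a standardized capability carrier and a merging pipeline of the kind the abstract describes might look. All names here (TemplateCache, TemplatePipeline, the "lora"/"kv_cache" carrier strings) are hypothetical stand-ins, not the released Diffusion Templates API.

```python
# Minimal sketch of the plugin abstraction described above, under assumed names.
# The real framework's interfaces and carrier handling may differ.
from dataclasses import dataclass
from typing import Any, Dict, List


@dataclass
class TemplateCache:
    """Standardized carrier for one controllable capability (hypothetical)."""
    name: str                # e.g. "edge_structure", "age_control"
    carrier: str             # "lora" or "kv_cache" in this sketch
    payload: Dict[str, Any]  # carrier-specific weights or cached activations
    scale: float = 1.0       # per-capability injection strength


class TemplatePipeline:
    """Loads, merges, and injects Template caches into a base diffusion runtime."""

    def __init__(self, base_model: Any):
        self.base_model = base_model
        self.caches: List[TemplateCache] = []

    def load(self, cache: TemplateCache) -> "TemplatePipeline":
        self.caches.append(cache)
        return self

    def inject(self) -> None:
        # Dispatch on the carrier type so heterogeneous capabilities
        # (LoRA deltas, cached key/value states, ...) share one interface.
        for cache in self.caches:
            if cache.carrier == "lora":
                self._inject_lora(cache)
            elif cache.carrier == "kv_cache":
                self._inject_kv_cache(cache)
            else:
                raise ValueError(f"unknown carrier: {cache.carrier}")

    def _inject_lora(self, cache: TemplateCache) -> None:
        # Placeholder: merge low-rank weight deltas into the base model,
        # scaled by cache.scale.
        print(f"merging LoRA '{cache.name}' (scale={cache.scale})")

    def _inject_kv_cache(self, cache: TemplateCache) -> None:
        # Placeholder: register cached key/value states as extra attention
        # context for the base model's attention layers.
        print(f"registering KV-Cache '{cache.name}' (scale={cache.scale})")


if __name__ == "__main__":
    pipe = TemplatePipeline(base_model="<any diffusion backbone>")
    pipe.load(TemplateCache("edge_structure", "kv_cache", payload={}, scale=0.8))
    pipe.load(TemplateCache("aesthetic_align", "lora", payload={}, scale=1.0))
    pipe.inject()  # both capabilities composed into one generation run
```

The point of the sketch is the design choice the abstract emphasizes: because the interface is defined at the cache level rather than around a specific control architecture, composing controls reduces to loading multiple caches, and supporting a new carrier only means adding another dispatch branch rather than a new pipeline.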