The paper introduces a method for composing object-level visual prompts within text-to-image diffusion models to generate semantically coherent compositions across diverse scenes and styles. To preserve object identity while enabling compositional flexibility, the authors propose a KV-mixed cross-attention mechanism that draws keys from a small-bottleneck encoder for layout control and values from a larger-bottleneck encoder for detailed appearance. Object-level compositional guidance during inference further enhances identity preservation and layout accuracy, improving the diversity and quality of generated compositions.
Achieve semantically coherent image compositions by mixing layout-focused and appearance-focused visual representations in a diffusion model's cross-attention.
We introduce a method for composing object-level visual prompts within a text-to-image diffusion model. Our approach addresses the task of generating semantically coherent compositions across diverse scenes and styles, similar to the versatility and expressiveness offered by text prompts. A key challenge in this task is to preserve the identity of the objects depicted in the input visual prompts, while also generating diverse compositions across different images. To address this challenge, we introduce a new KV-mixed cross-attention mechanism, in which keys and values are learned from distinct visual representations. The keys are derived from an encoder with a small bottleneck for layout control, whereas the values come from a larger bottleneck encoder that captures fine-grained appearance details. By mixing keys and values from these complementary sources, our model preserves the identity of the visual prompts while supporting flexible variations in object arrangement, pose, and composition. During inference, we further propose object-level compositional guidance to improve the method's identity preservation and layout correctness. Results show that our technique produces diverse scene compositions that preserve the unique characteristics of each visual prompt, expanding the creative potential of text-to-image generation.
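The abstract's central idea, mixing keys from a layout encoder with values from an appearance encoder inside cross-attention, can be sketched in a few lines. The following is a minimal single-head illustration, not the paper's implementation: the function name, shapes, and the plain-numpy formulation are assumptions, but it shows the mechanism the text describes, where the small-bottleneck features decide *where* attention lands and the large-bottleneck features decide *what* gets written there.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def kv_mixed_cross_attention(queries, layout_feats, appearance_feats, d_k):
    """Single-head KV-mixed cross-attention (illustrative sketch).

    queries:          (n_q, d_k) image tokens from the diffusion backbone
    layout_feats:     (n_t, d_k) keys from the small-bottleneck encoder (layout)
    appearance_feats: (n_t, d_v) values from the larger-bottleneck encoder (appearance)
    """
    # Keys from the layout encoder control where each query attends.
    scores = queries @ layout_feats.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)
    # Values from the appearance encoder supply fine-grained detail.
    return weights @ appearance_feats
```

Because the attention weights are computed only against the layout features, each output token is a convex combination of appearance features, which is how identity detail can be injected without dictating object placement.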
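The abstract does not specify the exact form of the object-level compositional guidance, only that it is applied at inference to improve identity preservation and layout correctness. One plausible reading, assumed here, is an extension of classifier-free guidance with one extra guidance term per visual prompt; the function below is a hypothetical sketch under that assumption, with made-up names and default scales.

```python
import numpy as np

def object_level_guidance(eps_uncond, eps_text, eps_objects,
                          text_scale=7.5, obj_scale=3.0):
    """Hypothetical object-level compositional guidance (assumed form).

    eps_uncond:  noise prediction with no conditioning
    eps_text:    noise prediction conditioned on the text prompt
    eps_objects: list of noise predictions, each conditioned on one visual prompt
    """
    # Standard classifier-free guidance toward the text condition...
    eps = eps_uncond + text_scale * (eps_text - eps_uncond)
    # ...plus one additional term per object-level visual prompt.
    for eps_obj in eps_objects:
        eps = eps + obj_scale * (eps_obj - eps_uncond)
    return eps
```

With `obj_scale=0` this reduces to ordinary classifier-free guidance, so the per-object terms can be seen as an additive correction steering the sample toward each prompt's identity and placement.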