The paper introduces Spatial Chain-of-Thought (SCoT), a framework that combines the spatial reasoning of Multimodal Large Language Models (MLLMs) with the generative capabilities of diffusion models for improved image generation. SCoT trains a diffusion model on interleaved text-coordinate instructions to enhance layout awareness and uses MLLMs as planners to generate detailed layout plans. Experiments show SCoT achieves state-of-the-art performance on image generation benchmarks and excels in complex reasoning and image editing tasks.
Unleashing the spatial reasoning potential of diffusion models no longer requires expensive joint training: a plug-and-play framework leverages MLLMs for layout planning instead.
While diffusion models have shown exceptional capabilities in aesthetic image synthesis, they often struggle with complex spatial understanding and reasoning. Existing approaches resort to Multimodal Large Language Models (MLLMs) to enhance this capability. However, they either incur high computational costs through joint training or suffer from spatial information loss when relying solely on textual prompts. To alleviate these limitations, we propose a Spatial Chain-of-Thought (SCoT) framework, a plug-and-play approach that effectively bridges the reasoning capabilities of MLLMs with the generative power of diffusion models. Specifically, we first enhance the diffusion model's layout awareness by training it on an interleaved text-coordinate instruction format. We then leverage state-of-the-art MLLMs as planners to generate comprehensive layout plans, transferring their spatial planning capabilities directly to the generation process. Extensive experiments demonstrate that our method achieves state-of-the-art performance on image generation benchmarks and significantly outperforms baselines on complex reasoning tasks, while also showing strong efficacy in image editing scenarios.
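The two-stage pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function names, the `<box>` serialization format, and the normalized coordinate convention are all assumptions made for the example; a real system would query an MLLM in the planning stage and feed the resulting instruction to a layout-aware diffusion model.

```python
# Hypothetical sketch of the SCoT-style pipeline: an MLLM planner
# produces a layout plan, which is serialized into an interleaved
# text-coordinate instruction for the diffusion model. All names and
# formats here are illustrative assumptions.

def plan_layout(prompt: str) -> list[dict]:
    """Stage 1 (assumed): a planner turns the prompt into a layout
    plan, one entry per object, with bounding boxes in normalized
    [0, 1] coordinates. A real system would query an MLLM here; we
    return a fixed plan for a sample prompt."""
    return [
        {"object": "a cat", "box": (0.05, 0.30, 0.45, 0.85)},
        {"object": "a dog", "box": (0.55, 0.30, 0.95, 0.85)},
    ]

def to_interleaved_instruction(prompt: str, plan: list[dict]) -> str:
    """Stage 2 input (assumed format): interleave object phrases with
    their coordinates so the diffusion model sees both text and layout."""
    parts = [prompt]
    for item in plan:
        x0, y0, x1, y1 = item["box"]
        parts.append(
            f"{item['object']} <box>({x0:.2f},{y0:.2f}),({x1:.2f},{y1:.2f})</box>"
        )
    return "; ".join(parts)

prompt = "a cat to the left of a dog"
instruction = to_interleaved_instruction(prompt, plan_layout(prompt))
print(instruction)
```

Because the layout plan is plain structured text, any sufficiently capable MLLM can serve as the planner without retraining the diffusion model jointly, which is the "plug-and-play" property the abstract highlights.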