MMCORE is introduced as a unified framework for multimodal image generation and editing, using a pre-trained VLM to predict semantic visual embeddings via learnable query tokens that condition a diffusion model. This approach transfers VLM reasoning capabilities to visual generation without deep fusion or training from scratch, reducing computational cost. Experiments show MMCORE outperforms SOTA baselines in text-to-image and single/multi-image editing tasks, demonstrating strong multimodal comprehension.
Unlock high-fidelity multimodal image generation and editing with a surprisingly simple framework that leverages pre-trained VLMs and diffusion models, bypassing complex fusion architectures.
We present MMCORE, a unified framework designed for multimodal image generation and editing. MMCORE leverages a pre-trained Vision-Language Model (VLM) to predict semantic visual embeddings via learnable query tokens, which then serve as conditioning signals for a diffusion model. This streamlined design transfers the rich understanding and reasoning capabilities of VLMs into the visual generation process. By obviating the need for deep fusion between autoregressive and diffusion models, or for training from scratch, MMCORE significantly reduces computational overhead while maintaining high-fidelity synthesis. MMCORE seamlessly integrates text-to-image synthesis with interleaved image generation, demonstrating robust multimodal comprehension in complex scenarios such as spatial reasoning and visual grounding. Comprehensive evaluations indicate that MMCORE consistently outperforms state-of-the-art baselines across a broad spectrum of text-to-image and single/multi-image editing benchmarks.
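As a rough illustration of this conditioning scheme, the sketch below shows how learnable query tokens could attend over a frozen VLM's hidden states to produce semantic visual embeddings, which are then projected into a diffusion model's conditioning space. This is a minimal PyTorch-style sketch under assumed names and dimensions (QueryConditioner, num_queries, vlm_dim, cond_dim), not MMCORE's actual implementation.

```python
import torch
import torch.nn as nn

class QueryConditioner(nn.Module):
    """Hypothetical sketch: learnable query tokens cross-attend to a frozen
    pre-trained VLM's hidden states, yielding semantic visual embeddings that
    condition a diffusion model. All names and dimensions are assumptions."""

    def __init__(self, num_queries=64, vlm_dim=4096, cond_dim=2048, num_heads=8):
        super().__init__()
        # Learnable query tokens that extract generation-relevant semantics
        # from the VLM's multimodal context.
        self.queries = nn.Parameter(torch.randn(num_queries, vlm_dim) * 0.02)
        self.attn = nn.MultiheadAttention(vlm_dim, num_heads, batch_first=True)
        # Projection from VLM embedding space into the diffusion model's
        # conditioning space (e.g., its cross-attention context dimension).
        self.proj = nn.Linear(vlm_dim, cond_dim)

    def forward(self, vlm_hidden_states):
        # vlm_hidden_states: (batch, seq_len, vlm_dim) from the frozen VLM
        # encoding the text prompt and any reference images.
        b = vlm_hidden_states.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        # Queries attend over the VLM context; only these lightweight modules
        # would be trained, leaving the VLM and diffusion backbone intact.
        semantic_embeds, _ = self.attn(q, vlm_hidden_states, vlm_hidden_states)
        # Output: (batch, num_queries, cond_dim), usable as cross-attention
        # conditioning for the diffusion denoiser in place of text embeddings.
        return self.proj(semantic_embeds)
```

Under this reading, the computational savings come from training only the query tokens and projection layers while both the VLM and the diffusion backbone remain pre-trained and largely frozen, rather than fusing or retraining the two models end to end.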