The paper introduces WeEdit, a new dataset, benchmark, and training framework for text-centric image editing, addressing the limitations of existing models in precisely modifying text within images. The authors generate a large-scale dataset of 330K training pairs using an HTML-based automatic editing pipeline, covering diverse editing operations and multiple languages. Their proposed glyph-guided supervised fine-tuning, followed by multi-objective reinforcement learning, significantly improves performance over previous open-source models.
Text-centric image editing gets a serious upgrade with WeEdit, a 330K-pair dataset and glyph-guided training framework that leapfrogs existing models in text clarity and instruction adherence.
Instruction-based image editing aims to modify specific content within existing images according to user-provided instructions while preserving non-target regions. Beyond traditional object- and style-centric manipulation, text-centric image editing focuses on modifying, translating, or rearranging textual elements embedded within images. However, existing leading models often struggle to execute complex text editing precisely, frequently producing blurry or hallucinated characters. We attribute these failures primarily to the lack of specialized training paradigms tailored for text-centric editing, as well as the absence of large-scale datasets and standardized benchmarks necessary for a closed-loop training and evaluation system. To address these limitations, we present WeEdit, a systematic solution encompassing a scalable data construction pipeline, two benchmarks, and a tailored two-stage training strategy. Specifically, we propose a novel HTML-based automatic editing pipeline, which generates 330K training pairs covering diverse editing operations and 15 languages, accompanied by standardized bilingual and multilingual benchmarks for comprehensive evaluation. On the algorithmic side, we employ glyph-guided supervised fine-tuning to inject explicit spatial and content priors, followed by a multi-objective reinforcement learning stage to align generation with instruction adherence, text clarity, and background preservation. Extensive experiments demonstrate that WeEdit outperforms previous open-source models by a clear margin across diverse editing operations.
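To make the data-construction idea concrete, here is a minimal sketch of how an HTML-based editing pair might be produced. All names (`TEMPLATE`, `make_edit_pair`) and styling choices are illustrative assumptions, not the paper's actual pipeline: the same template is rendered twice, once with the source text and once with the edited text, so a rendered pair differs only in the target string while every non-target attribute is shared.

```python
# Hypothetical sketch of an HTML-based editing-pair generator; names and
# parameters are assumptions for illustration, not the paper's pipeline.
from string import Template

TEMPLATE = Template(
    '<div style="font-family:$font;font-size:${size}px;color:$color">'
    "$text</div>"
)

def make_edit_pair(src_text, dst_text, font="serif", size=32, color="#000"):
    """Return (source_html, target_html); a renderer such as a headless
    browser would rasterize each into an image to form a training pair."""
    style = dict(font=font, size=size, color=color)
    return (
        TEMPLATE.substitute(text=src_text, **style),
        TEMPLATE.substitute(text=dst_text, **style),
    )

before, after = make_edit_pair("GRAND OPENING", "GRAND REOPENING")
# Only the edited string differs, so non-target content is preserved by
# construction -- stripping the text strings leaves identical markup.
assert before.replace("GRAND OPENING", "") == after.replace("GRAND REOPENING", "")
```

Because both renders share fonts, layout, and colors, ground-truth background preservation comes for free, which is what makes an HTML pipeline attractive for scaling to many languages and edit types.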
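The multi-objective reinforcement learning stage could combine its three objectives into a single scalar reward; the sketch below shows one plausible form. The weights and the scorer inputs are assumptions for illustration, not values from the paper.

```python
# Hedged sketch of a multi-objective reward: weights and score sources are
# assumptions, not the paper's actual formulation.
def combined_reward(adherence, clarity, bg_preservation,
                    weights=(0.4, 0.4, 0.2)):
    """Each score lies in [0, 1] (e.g. from a VLM judge, an OCR-based
    clarity metric, and a pixel-difference measure on non-target regions);
    returns their weighted sum as the scalar RL reward."""
    scores = (adherence, clarity, bg_preservation)
    if not all(0.0 <= s <= 1.0 for s in scores):
        raise ValueError("scores must lie in [0, 1]")
    return sum(w * s for w, s in zip(weights, scores))

r = combined_reward(0.9, 0.8, 1.0)  # close to 0.88 under these weights
```

A weighted sum is the simplest way to trade off instruction adherence, text clarity, and background preservation; in practice the balance among objectives would be tuned, and the paper may use a different aggregation entirely.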