The paper introduces VTEdit-Bench, a benchmark for evaluating multi-reference image editing models in virtual try-on (VTON) scenarios, comprising 24,220 test image pairs across five tasks of increasing complexity. To measure performance, the authors propose VTEdit-QA, a VLM-based evaluator that assesses model consistency, cloth consistency, and overall image quality. Experiments comparing eight universal editing models with seven specialized VTON models show that universal editors are competitive and generalize more stably, but struggle with complex multi-cloth conditioning.
Universal image editing models are surprisingly competitive with specialized virtual try-on systems, but still stumble when handling multiple garments simultaneously.
As virtual try-on (VTON) continues to advance, a growing number of real-world scenarios have emerged that push beyond the capabilities of existing specialized VTON models. Meanwhile, universal multi-reference image editing models have progressed rapidly and exhibit strong generalization in visual editing, suggesting a promising route toward more flexible VTON systems. However, the strengths and limitations of universal editors for VTON remain insufficiently explored due to the lack of systematic evaluation benchmarks. To address this gap, we introduce VTEdit-Bench, a comprehensive benchmark designed to evaluate universal multi-reference image editing models across diverse realistic VTON scenarios. VTEdit-Bench contains 24,220 test image pairs spanning five representative VTON tasks of progressively increasing complexity, enabling systematic analysis of robustness and generalization. We further propose VTEdit-QA, a reference-aware VLM-based evaluator that assesses VTON performance along three key aspects: model consistency, cloth consistency, and overall image quality. Using this framework, we systematically evaluate eight universal editing models and compare them with seven specialized VTON models. Results show that the top universal editors are competitive on conventional tasks and generalize more stably to harder scenarios, but remain challenged by complex reference configurations, particularly multi-cloth conditioning.
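As a rough illustration of what a reference-aware, VLM-based scoring loop over these three aspects could look like, here is a minimal Python sketch. It is not the paper's VTEdit-QA implementation: the `query_vlm` helper, the prompt wording, and the 1-5 rating scale are placeholder assumptions for illustration only.

```python
from dataclasses import dataclass


@dataclass
class VTONScores:
    model_consistency: float   # identity/pose preserved from the person reference
    cloth_consistency: float   # garment appearance preserved from the cloth reference(s)
    image_quality: float       # overall realism of the edited image


def query_vlm(prompt: str, image_paths: list[str]) -> float:
    """Hypothetical VLM-judge call (placeholder).

    Assumed to send the prompt plus images to a vision-language model and
    parse a numeric score from its answer; the actual VTEdit-QA prompting
    and parsing are not specified here.
    """
    raise NotImplementedError("plug in a VLM client of your choice")


def score_edit(person_img: str, cloth_imgs: list[str], edited_img: str) -> VTONScores:
    """Score one try-on result against its person and garment references."""
    return VTONScores(
        model_consistency=query_vlm(
            "Rate 1-5 how well the person's identity, pose, and body are preserved.",
            [person_img, edited_img],
        ),
        cloth_consistency=query_vlm(
            "Rate 1-5 how faithfully the reference garment(s) appear on the person.",
            cloth_imgs + [edited_img],
        ),
        image_quality=query_vlm(
            "Rate 1-5 the overall realism and absence of visual artifacts.",
            [edited_img],
        ),
    )
```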