The paper introduces PROMO, a promptable virtual try-on (VTON) framework built on a Flow Matching Diffusion Transformer (DiT) for efficient, high-fidelity image synthesis. PROMO addresses the core VTON challenges of subject preservation, texture transfer, and harmonization through latent multi-modal conditional concatenation, and uses self-reference mechanisms to cut inference overhead. Experiments on standard benchmarks show that PROMO achieves higher visual fidelity than existing VTON and general image editing methods while maintaining a competitive balance between quality and speed.
Flow-matching transformers with latent multi-modal conditioning and self-reference can surpass existing virtual try-on methods in visual fidelity while remaining competitive in inference speed.
Virtual try-on (VTON) has become a core capability for online retail, where realistic try-on results provide reliable fit guidance, reduce returns, and benefit both consumers and merchants. Diffusion-based VTON methods achieve photorealistic synthesis, yet they often rely on intricate architectures such as auxiliary reference networks and suffer from slow sampling, making the trade-off between fidelity and efficiency a persistent challenge. We approach VTON as a structured image editing problem that demands strong conditional generation under three key requirements: subject preservation, faithful texture transfer, and seamless harmonization. From this perspective, our training framework is generic and transfers to broader image editing tasks. Moreover, the paired data produced by VTON constitutes a rich supervisory resource for training general-purpose editors. We present PROMO, a promptable virtual try-on framework built upon a Flow Matching DiT backbone with latent multi-modal conditional concatenation. By leveraging the efficiency of this conditioning scheme together with self-reference mechanisms, our approach substantially reduces inference overhead. On standard benchmarks, PROMO surpasses both prior VTON methods and general image editing models in visual fidelity while delivering a competitive balance between quality and speed. These results demonstrate that flow-matching transformers, coupled with latent multi-modal conditioning and self-reference acceleration, offer an effective and training-efficient solution for high-quality virtual try-on.
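To make the two ingredients named in the abstract concrete, the sketch below pairs a standard rectified-flow (flow matching) training step with sequence-level concatenation of condition latents into a transformer backbone. All names here (ToyDiT, flow_matching_step, the token counts and dimensions) are hypothetical stand-ins chosen for illustration; the paper's actual architecture, encoders, and loss may differ.

```python
# Minimal sketch, assuming PyTorch, of flow-matching training with latent
# multi-modal conditional concatenation. ToyDiT is a toy stand-in for a DiT
# backbone; all names and shapes here are illustrative, not PROMO's.
import torch
import torch.nn as nn


class ToyDiT(nn.Module):
    """Toy DiT stand-in: a transformer encoder run over the concatenation
    of noisy-image latent tokens and condition latent tokens."""

    def __init__(self, dim=256, depth=4, heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)
        self.time_mlp = nn.Sequential(nn.Linear(1, dim), nn.SiLU(),
                                      nn.Linear(dim, dim))
        self.head = nn.Linear(dim, dim)

    def forward(self, noisy_tokens, cond_tokens, t):
        # Latent multi-modal conditional concatenation: condition latents
        # (e.g. garment and masked-person encodings) are appended along the
        # sequence axis, so a single backbone attends jointly over all
        # modalities instead of routing conditions through an auxiliary
        # reference network.
        seq = torch.cat([noisy_tokens, cond_tokens], dim=1)  # (B, Ln+Lc, D)
        seq = seq + self.time_mlp(t[:, None])[:, None, :]    # add time embed
        seq = self.blocks(seq)
        # Velocity is predicted only at the noisy-latent positions.
        return self.head(seq[:, : noisy_tokens.size(1)])


def flow_matching_step(model, x1, cond_tokens):
    """One rectified-flow training step: interpolate between noise x0 and
    clean latents x1, then regress the straight-line velocity x1 - x0."""
    x0 = torch.randn_like(x1)                      # noise endpoint
    t = torch.rand(x1.size(0), device=x1.device)   # t ~ U(0, 1)
    tb = t[:, None, None]                          # broadcast over tokens
    xt = (1 - tb) * x0 + tb * x1                   # linear interpolation
    v_pred = model(xt, cond_tokens, t)
    return ((v_pred - (x1 - x0)) ** 2).mean()      # MSE to target velocity


# Toy usage: 64 latent tokens for the person image, 32 condition tokens
# (e.g. garment + pose encodings), all 256-dimensional.
model = ToyDiT()
x1 = torch.randn(2, 64, 256)
cond = torch.randn(2, 32, 256)
loss = flow_matching_step(model, x1, cond)
loss.backward()
```

The self-reference acceleration mentioned in the abstract is not modeled here; the sketch covers only the conditioning path and the generic flow-matching objective.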