Correcting errors early in the diffusion process matters more than fixing them later: Stepwise-Flow-GRPO exploits this insight to substantially improve RL-based training of flow models.
You can drastically improve text-to-image retrieval from short, ambiguous queries by using a language model to expand them into richer, quality-aware descriptions.
Forget handcrafted metrics: RetouchIQ uses an RL-tuned MLLM to generate its own reward signals for instruction-based image editing, leading to more semantically consistent and perceptually pleasing results.