The paper introduces Diffusion Mental Averages (DMA), a novel model-centric approach for generating sharp, realistic "mental averages" of concepts within diffusion models. DMA optimizes noise latents so that their denoising trajectories progressively converge toward shared semantics across timesteps. By clustering samples in CLIP space and using Textual Inversion/LoRA, DMA extends to multimodal concepts, producing visual summaries and revealing model biases.
Forget blurry averages – DMA unlocks sharp, realistic concept prototypes directly within diffusion models, offering a new lens into model understanding and bias.
Can a diffusion model produce its own "mental average" of a concept, one that is as sharp and realistic as a typical sample? We introduce Diffusion Mental Averages (DMA), a model-centric answer to this question. While prior methods aim to average image collections, they produce blurry results when applied to diffusion samples from the same prompt. These data-centric techniques operate outside the model, ignoring the generative process. In contrast, DMA averages within the diffusion model's semantic space, as discovered by recent studies. Since this space evolves across timesteps and lacks a direct decoder, we cast averaging as trajectory alignment: optimize multiple noise latents so their denoising trajectories progressively converge toward shared coarse-to-fine semantics, yielding a single sharp prototype. We extend our approach to multimodal concepts (e.g., dogs with many breeds) by clustering samples in semantically rich spaces such as CLIP and applying Textual Inversion or LoRA to bridge CLIP clusters into diffusion space. This is, to our knowledge, the first approach that delivers consistent, realistic averages, even for abstract concepts, serving as a concrete visual summary and a lens into model biases and concept representation.
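To make the trajectory-alignment idea concrete, here is a minimal, self-contained sketch under toy assumptions: a tiny conv net stands in for the pretrained text-to-image denoiser, a plain DDIM update stands in for the real sampler, and all names (`eps_model`, the schedule, the progressive weights) are illustrative rather than the authors' implementation.

```python
# Toy sketch of DMA-style trajectory alignment (not the authors' code).
import torch
import torch.nn as nn

T = 20                                    # denoising steps (toy setting)
alphas = torch.linspace(0.999, 0.01, T)   # cumulative alpha schedule (toy)

eps_model = nn.Sequential(                # stand-in for a pretrained U-Net
    nn.Conv2d(1, 16, 3, padding=1), nn.SiLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
for p in eps_model.parameters():          # the model stays frozen;
    p.requires_grad_(False)               # only the noise latents are optimized

def ddim_step(x, t):
    """One deterministic DDIM step from timestep t to t-1."""
    a_t = alphas[t]
    a_prev = alphas[t - 1] if t > 0 else torch.tensor(1.0)
    eps = eps_model(x)
    x0 = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()   # predicted clean latent
    return a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps

# Jointly optimize several initial noise latents so their denoising
# trajectories converge: the alignment penalty (variance across latents)
# is weighted more heavily at later, fine-detail timesteps.
latents = torch.randn(8, 1, 32, 32, requires_grad=True)
opt = torch.optim.Adam([latents], lr=1e-2)

for it in range(100):
    x = latents
    loss = 0.0
    for t in reversed(range(T)):          # high noise -> clean
        x = ddim_step(x, t)
        w = (T - t) / T                   # progressive weight: small early, 1 late
        loss = loss + w * x.var(dim=0).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# After optimization, the shared trajectory endpoint (e.g., x.mean(0))
# serves as the single sharp "mental average" of the concept.
```

The key design choice this sketch mirrors is that the average lives inside the generative process: nothing is averaged in pixel space, so the result stays on the model's data manifold instead of blurring.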
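The multimodal extension can be sketched similarly. The snippet below clusters generated samples in CLIP space; the checkpoint name, file paths, and cluster count are assumptions for illustration, and the Textual Inversion/LoRA bridging step is only indicated in comments.

```python
# Hedged sketch: cluster samples of a multimodal concept in CLIP space.
import glob
import torch
from PIL import Image
from sklearn.cluster import KMeans
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical path to samples generated from one prompt, e.g. "a dog".
images = [Image.open(p) for p in glob.glob("samples/dog_*.png")]
inputs = processor(images=images, return_tensors="pt")
with torch.no_grad():
    feats = model.get_image_features(**inputs)        # (N, 512) CLIP embeddings
feats = feats / feats.norm(dim=-1, keepdim=True)      # cosine geometry

labels = KMeans(n_clusters=4, n_init=10).fit_predict(feats.numpy())

# Each cluster (e.g., one dog breed) would then be mapped back into diffusion
# space by fitting a Textual Inversion token or a LoRA on its members, after
# which trajectory alignment yields one sharp average per cluster.
```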