A surprisingly simple change to the motion latent space—representing each body joint with its own token—dramatically improves text-to-motion generation quality, outperforming monolithic latent vector approaches.
Existing safety guardrails for text-to-image models can backfire, inadvertently amplifying other types of harm; this new method adaptively steers generation to resolve these conflicts and reduce overall harmful content.
By explicitly disentangling target features with MLLM guidance, MeGU achieves superior unlearning performance without sacrificing model utility, outperforming existing methods that struggle with the entanglement of semantic concepts in model representations.