Erasing unwanted concepts from text-to-image models doesn't require retraining the whole U-Net: surgically misdirecting the text encoder's early self-attention layers removes the concept with minimal collateral damage.
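The idea can be illustrated with a toy sketch. Everything here is hypothetical: a single NumPy self-attention layer stands in for one early layer of a real text encoder (e.g. CLIP's), and "misdirection" is modeled in the simplest possible way, by rerouting the unwanted concept token to a neutral anchor token before attention runs, so the rest of the prompt is left untouched:

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    # Plain single-head self-attention over token embeddings x: (T, d).
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def erase_concept(x, Wq, Wk, Wv, concept_idx, anchor_idx):
    # Toy "misdirection": overwrite the unwanted concept token's embedding
    # with a neutral anchor's before the early attention layer runs, so
    # every downstream token attends to the anchor instead of the concept.
    # (Hypothetical stand-in for editing a real text encoder's layer.)
    x_edit = x.copy()
    x_edit[concept_idx] = x[anchor_idx]
    return self_attention(x_edit, Wq, Wk, Wv)

rng = np.random.default_rng(0)
d = 8
x = rng.normal(size=(5, d))                      # 5 prompt tokens
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

clean = self_attention(x, Wq, Wk, Wv)
erased = erase_concept(x, Wq, Wk, Wv, concept_idx=2, anchor_idx=0)
```

A real implementation would apply this kind of edit via forward hooks on the first few self-attention blocks of the text encoder, leaving the U-Net weights frozen; the toy version only shows why the edit is surgical: only the rows influenced by the rerouted token change.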