Forget sensitive data without sacrificing model coherence: Attention Smoothing Unlearning (ASU) leverages self-distillation with temperature-scaled attention to erase unwanted knowledge while preserving general language abilities.
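A minimal sketch of the idea behind temperature-scaled attention distillation, assuming a standard setup: the student matches the teacher's temperature-smoothed attention on retained content and pushes attention toward uniform on content to forget. The specific logits, temperature, and loss weighting below are illustrative, not the paper's actual configuration.

```python
import numpy as np

def softmax(x, T=1.0):
    # Temperature-scaled softmax: higher T flattens (smooths) the distribution.
    z = x / T
    z -= z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    # KL(p || q) between two discrete distributions.
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

# Hypothetical attention logits for one head/row of teacher and student.
teacher_logits = np.array([2.0, 0.5, -1.0, 0.1])
student_logits = np.array([1.8, 0.6, -0.9, 0.2])

T = 4.0  # smoothing temperature (illustrative value)
teacher_attn = softmax(teacher_logits, T)  # smoothed distillation target
student_attn = softmax(student_logits, T)

# Retain objective: match the teacher's smoothed attention.
retain_loss = kl(teacher_attn, student_attn)

# Forget objective: drive attention over forget-set tokens toward uniform,
# erasing the learned pattern rather than the model's general behavior.
uniform = np.full_like(teacher_attn, 1.0 / teacher_attn.size)
forget_loss = kl(uniform, student_attn)

loss = retain_loss + 0.5 * forget_loss  # 0.5 is an illustrative weight
```

Smoothing both distributions with the same temperature keeps gradients soft, which is what lets the forget objective act without collapsing the model's overall attention structure.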
Zeroth-order optimization gets a major boost: ZO-Muon cuts query requirements by 75% while improving accuracy in LLM and ViT fine-tuning.
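For context on why query counts matter here, a generic two-point zeroth-order gradient estimator is sketched below: it needs only two forward evaluations per step and no backpropagation, which is the baseline cost that query-efficient methods like ZO-Muon aim to reduce further. This is the classical estimator, not ZO-Muon's actual update rule; the toy loss and step size are illustrative.

```python
import numpy as np

def zo_gradient(f, x, mu, rng):
    # Two-point zeroth-order estimate: probe f along one random direction u,
    # using two queries f(x + mu*u) and f(x - mu*u) instead of a backward pass.
    u = rng.standard_normal(x.shape)
    g = (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu)
    return g * u  # unbiased (up to O(mu^2)) estimate of the gradient

# Toy quadratic loss whose true gradient at x is 2x.
f = lambda x: float(np.sum(x ** 2))

rng = np.random.default_rng(42)
x = np.array([1.0, -2.0, 0.5])
x0_loss = f(x)

for _ in range(500):
    x -= 0.05 * zo_gradient(f, x, mu=1e-3, rng=rng)
```

Each iteration costs exactly two function queries, so halving or quartering the queries per step (as a 75% reduction implies) directly multiplies the number of optimization steps available under a fixed evaluation budget.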