Forget about fine-tuning: this new prompting method lets you selectively erase knowledge from LLMs on demand, even without access to model weights.
Achieve >95% forget quality in LLMs with minimal side effects by isolating tokens in the target subdomain and unlearning them with asymmetric LoRA.
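The second blurb names asymmetric LoRA as the unlearning mechanism: the low-rank down-projection is frozen at its random initialization and only the up-projection is trained, so the weight update stays confined to a fixed low-rank subspace. A minimal NumPy sketch of that idea follows; the single linear layer, the zero-output forget target, and all dimensions here are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single linear layer standing in for one weight matrix of an LLM.
d_in, d_out, rank = 8, 8, 2
W = rng.normal(size=(d_out, d_in))                  # frozen base weights

# Asymmetric LoRA: the down-projection A stays at its random init;
# only the up-projection B is trained, so the update B @ A is low-rank
# and confined to the subspace spanned by A's rows.
A = rng.normal(size=(rank, d_in)) / np.sqrt(d_in)   # frozen
B = np.zeros((d_out, rank))                         # trainable, starts at 0

def forward(x):
    """Adapted layer: y = (W + B A) x."""
    return (W + B @ A) @ x

# Hypothetical "forget" input: an activation from the target subdomain.
x_f = rng.normal(size=d_in)
y_before = W @ x_f            # the behavior we want to erase

# Unlearn by driving the layer's output on x_f toward zero
# (a simple stand-in for a forget objective).
lr = 0.05
for _ in range(500):
    err = forward(x_f)                      # residual vs. target output 0
    grad_B = 2 * np.outer(err, A @ x_f)     # d||err||^2 / dB
    B -= lr * grad_B                        # update only B; A and W frozen

y_after = forward(x_f)
```

After training, `y_after` is much smaller than `y_before` while `W` itself is untouched; inputs with little overlap with the frozen subspace defined by `A` are less affected, which is one intuition for why such updates can have limited side effects.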