∗Tianqing Zhu is the corresponding author
LLM agents can now selectively forget sensitive information without sacrificing overall performance, thanks to a new framework that translates natural language unlearning requests into actionable prompts.
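To make that mechanism concrete, here is a minimal sketch of how a natural-language unlearning request could be compiled into a guardrail prompt. Everything here is an illustrative assumption, not the framework's actual API: the `build_unlearning_prompt` helper, its template text, and the `llm` callable are all hypothetical.

```python
# Minimal sketch: compiling a natural-language unlearning request into a
# guardrail system prompt. All names below (build_unlearning_prompt,
# answer_with_unlearning, the llm callable) are hypothetical stand-ins,
# not the paper's actual framework.
from typing import Callable

def build_unlearning_prompt(forget_request: str) -> str:
    """Turn a plain-language forget request into a system-prompt instruction."""
    return (
        "Behave as if you have never seen the following information. "
        "Do not reveal, confirm, or reason about it; answer everything "
        f"else normally:\n- {forget_request}"
    )

def answer_with_unlearning(
    llm: Callable[[str, str], str], question: str, forget_request: str
) -> str:
    """Prepend the guardrail to the system prompt before querying the model."""
    return llm(build_unlearning_prompt(forget_request), question)

# Usage with a stand-in model (a real deployment would call a chat API):
if __name__ == "__main__":
    def fake_llm(system_prompt: str, user_message: str) -> str:
        return f"[guarded by: {system_prompt[:40]}...] I can't share that."

    print(answer_with_unlearning(
        fake_llm,
        "Where does Alice live?",
        "Alice's home address is 42 Elm Street",
    ))
```

Because the forgetting happens at the prompt level rather than in the weights, the model's general capabilities are untouched, which is why selective forgetting need not cost overall performance.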
Dataset distillation, intended to compress data while preserving model performance, actually leaks sensitive information about the original training data and model architecture.
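As a rough illustration of the kind of leakage this finding points to, the sketch below runs a simple loss-threshold membership-inference probe against a model trained on distilled data: if the model's loss on a candidate sample is unusually low, the distilled data likely still encodes that original training point. The thresholding heuristic and all names are assumptions for illustration, not the paper's actual attack.

```python
# Minimal sketch of a loss-threshold membership-inference probe against a
# model trained on distilled data. The threshold value and array contents
# are illustrative assumptions.
import numpy as np

def loss_threshold_attack(per_sample_losses: np.ndarray, threshold: float) -> np.ndarray:
    """Flag samples whose loss under the distilled-data model falls below
    the threshold as likely members of the original training set."""
    return per_sample_losses < threshold

# Losses of candidate samples under the distilled-data model; low loss
# suggests the sample's information survived distillation.
losses = np.array([0.12, 2.30, 0.08, 1.90])
print(loss_threshold_attack(losses, threshold=0.5))  # [ True False  True False]
```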