Ignoring differences in clients' privacy requirements in federated learning can hurt accuracy by 10%; a new privacy-aware client selection method closes this gap.
Servers in differentially private federated learning should strategically select clients based on privacy sensitivity, even if it means excluding some participants, to maximize training effectiveness and cost efficiency.
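As a toy illustration only (not the paper's actual algorithm; the client fields, the `sigma` noise multiplier, and the selection rule are all assumed here), privacy-aware selection might rank candidate clients by their differential-privacy noise level and keep the least-noisy subset, excluding the rest:

```python
# Hypothetical sketch of privacy-aware client selection.
# Assumption: each client reports a DP noise multiplier `sigma`;
# larger sigma means noisier model updates, so the server prefers
# clients with smaller sigma and drops the rest from this round.

def select_clients(clients, k):
    """Return the k clients with the smallest noise multipliers."""
    return sorted(clients, key=lambda c: c["sigma"])[:k]

clients = [
    {"id": "a", "sigma": 1.2},
    {"id": "b", "sigma": 0.6},
    {"id": "c", "sigma": 2.5},
    {"id": "d", "sigma": 0.9},
]

chosen = select_clients(clients, k=2)
print([c["id"] for c in chosen])  # → ['b', 'd']
```

The point of the sketch is the trade-off the summary describes: deliberately excluding high-noise (highly privacy-sensitive) clients can make each training round more effective per unit cost.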
Auditing AI forgetting reveals a counterintuitive result: regulators can optimally *reduce* inspection intensity as deletion requests increase, because the operator's weakened unlearning makes non-compliance easier to detect.