LA-LoRA delivers a 16% accuracy boost for LoRA fine-tuning in differentially private federated learning by mitigating gradient coupling and noise amplification.
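The summary above does not describe LA-LoRA's mechanism, so the sketch below only illustrates the problem it targets: in a plain DP-FedAvg baseline, the two LoRA factors receive independent clipped-and-noised updates, and reconstructing the effective weight delta multiplies those noise draws together. All names, shapes, and hyperparameters here are hypothetical.

```python
# Hypothetical illustration (not the LA-LoRA algorithm): one DP-FedAvg round over
# LoRA factors, showing where gradient coupling / noise amplification comes from.
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 32, 64, 4      # rank-r LoRA factors for a d_out x d_in layer (assumed sizes)
clip_norm = 1.0                 # per-client L2 clipping bound
noise_mult = 0.8                # Gaussian noise multiplier
n_clients = 10

def clip(update, bound):
    """Scale a client update so its L2 norm is at most `bound`."""
    norm = np.linalg.norm(update)
    return update * min(1.0, bound / (norm + 1e-12))

def dp_aggregate(updates):
    """Clip each client's factor update, sum, add Gaussian noise, then average."""
    total = sum(clip(u, clip_norm) for u in updates)
    total = total + rng.normal(scale=noise_mult * clip_norm, size=total.shape)
    return total / len(updates)

# Simulated local LoRA-factor updates from each client (stand-ins for real gradients).
client_dA = [rng.normal(scale=0.1, size=(r, d_in)) for _ in range(n_clients)]
client_dB = [rng.normal(scale=0.1, size=(d_out, r)) for _ in range(n_clients)]

# Current server-side LoRA factors.
A = rng.normal(scale=0.05, size=(r, d_in))
B = rng.normal(scale=0.05, size=(d_out, r))

dA = dp_aggregate(client_dA)
dB = dp_aggregate(client_dB)

# Applying the noised factor updates changes the effective weight by
#   (B + dB)(A + dA) - B A = B dA + dB A + dB dA,
# and the last term multiplies two independent noise draws: the coupling /
# noise-amplification effect the summary says LA-LoRA mitigates.
cross_term = dB @ dA
print("cross-noise term norm:", np.linalg.norm(cross_term))
```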
AdamW, a popular optimizer for large models, can now be used in differentially private federated learning without sacrificing convergence speed or accuracy, thanks to a new bias-corrected and variance-stabilized variant.
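The summary names a "bias-corrected and variance-stabilized" AdamW variant without specifying it, so the following is only a hedged sketch of one plausible reading: standard AdamW bias correction applied to a clipped-and-noised gradient, plus subtracting the known injected-noise variance from the second-moment estimate so the adaptive denominator is not inflated by privacy noise. Every detail (loss, dimensions, the variance-subtraction step) is an assumption, not the paper's algorithm.

```python
# Hedged sketch: AdamW on a DP-noised gradient, with an assumed noise-variance
# correction of the second-moment estimate. Not the published variant.
import numpy as np

rng = np.random.default_rng(1)

dim = 10
lr, beta1, beta2, eps, weight_decay = 1e-3, 0.9, 0.999, 1e-8, 0.01
clip_norm, noise_mult = 1.0, 1.0
sigma2 = (noise_mult * clip_norm) ** 2   # per-coordinate variance of the injected noise

theta = rng.normal(size=dim)
m = np.zeros(dim)   # first-moment estimate
v = np.zeros(dim)   # second-moment estimate

def privatize(grad):
    """Clip the gradient to `clip_norm` and add Gaussian noise (DP-SGD style)."""
    grad = grad * min(1.0, clip_norm / (np.linalg.norm(grad) + 1e-12))
    return grad + rng.normal(scale=noise_mult * clip_norm, size=grad.shape)

for step in range(1, 101):
    true_grad = theta                        # toy quadratic loss 0.5 * ||theta||^2
    g = privatize(true_grad)

    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2

    m_hat = m / (1 - beta1 ** step)          # standard Adam bias correction
    v_hat = v / (1 - beta2 ** step)
    v_hat = np.maximum(v_hat - sigma2, eps)  # subtract injected noise variance (assumed stabilization)

    # Decoupled weight decay, as in AdamW.
    theta -= lr * (m_hat / (np.sqrt(v_hat) + eps) + weight_decay * theta)

print("final parameter norm:", np.linalg.norm(theta))
```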