Orthogonal parameter updates and clustered aggregation can cut communication costs by 73% while improving model performance in federated fine-tuning of LLMs.
Achieve Byzantine-robust, privacy-preserving federated learning without sacrificing convergence speed by letting each device use a GNN-based reinforcement-learning agent to autonomously optimize its neighbor selection.