SVD-based aggregation in FedMomentum lets LoRA modules in federated learning retain training momentum across rounds, yielding faster convergence and better final performance.
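A minimal sketch of the idea behind SVD-based aggregation of LoRA factors: average the clients' low-rank products in full space, then truncate back to the LoRA rank via SVD so the dominant shared directions survive. Function and variable names here are illustrative, not FedMomentum's actual API.

```python
import numpy as np

def svd_aggregate_lora(client_updates, rank):
    """Aggregate per-client LoRA factor pairs (B_i, A_i) into one rank-r pair.

    Averaging the products B_i @ A_i and truncating via SVD keeps the
    dominant shared update directions (illustrative sketch only).
    """
    # Average the low-rank products in the full parameter space.
    avg = np.mean([B @ A for B, A in client_updates], axis=0)
    # Truncated SVD back down to the LoRA rank.
    U, s, Vt = np.linalg.svd(avg, full_matrices=False)
    B_new = U[:, :rank] * s[:rank]   # shape (d_out, r)
    A_new = Vt[:rank, :]             # shape (r, d_in)
    return B_new, A_new

# Toy usage: three clients, each with a rank-4 LoRA update on a 16x8 weight.
rng = np.random.default_rng(0)
clients = [(rng.standard_normal((16, 4)), rng.standard_normal((4, 8)))
           for _ in range(3)]
B, A = svd_aggregate_lora(clients, rank=4)
print(B.shape, A.shape)  # (16, 4) (4, 8)
```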
Slash gas costs for decentralized federated learning by using optimistic execution and validity proofs, scaling to 800 participants without compromising trust.
Personalized federated learning can boost vision-and-language navigation (VLN) performance by up to 7.8% in trajectory fidelity and converge 1.38x faster by selectively fusing parameters in environment-sensitive layers.
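The selective-fusion step above can be sketched as blending global weights into the local model only for the environment-sensitive layers, while every other layer stays fully personalized. Layer-name matching and the blend weight `alpha` are assumptions for illustration, not the paper's actual scheme.

```python
def selective_fuse(local_state, global_state, fuse_keys, alpha=0.5):
    """Blend global parameters into the local model only for layers whose
    name matches one of fuse_keys; all other layers remain local.
    (Illustrative sketch; key names and alpha are assumptions.)
    """
    fused = {}
    for name, w_local in local_state.items():
        if any(key in name for key in fuse_keys):
            # Environment-sensitive layer: mix in the global parameters.
            fused[name] = alpha * global_state[name] + (1 - alpha) * w_local
        else:
            # Personalized layer: keep the client's own parameters.
            fused[name] = w_local
    return fused

# Toy usage with scalar "weights" standing in for tensors.
local = {"vision.0.weight": 1.0, "policy.head": 3.0}
global_ = {"vision.0.weight": 2.0, "policy.head": 9.0}
out = selective_fuse(local, global_, fuse_keys=["vision"], alpha=0.5)
print(out)  # {'vision.0.weight': 1.5, 'policy.head': 3.0}
```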