Transformers perform analogical reasoning by aligning feature representations of similar entities, but only if trained with the right curriculum.
Achieve adaptive, perception-aware image compression without any training by simply steering a pre-trained diffusion model.
FedCova sidesteps the need for clean data in federated learning by directly encoding robustness to noisy labels into the model itself via feature covariance learning.
Ignoring privacy differences between clients in federated learning can cost up to 10% accuracy; a new privacy-aware client selection method closes this gap.
Servers in differentially private federated learning should strategically select clients based on privacy sensitivity, even if it means excluding some participants, to maximize training effectiveness and cost efficiency.