FedLECC, a novel client selection strategy for federated learning, addresses the challenges of non-IID data by grouping clients based on label-distribution similarity and prioritizing those with higher local loss. This approach selects a small, informative, and diverse client set, improving convergence and model quality. Experiments under severe label skew show that FedLECC improves test accuracy by up to 12% and reduces communication rounds by approximately 22% compared to existing methods.
FedLECC cuts communication overhead in federated learning by up to 50% and boosts accuracy by up to 12%, simply by picking clients based on label-distribution similarity and local loss.
Federated Learning (FL) enables distributed Artificial Intelligence (AI) across cloud-edge environments by allowing collaborative model training without centralizing data. In cross-device deployments, FL systems face strict communication and participation constraints, as well as strongly non-independent and identically distributed (non-IID) data that degrades convergence and model quality. Since only a subset of devices (i.e., clients) can participate per training round, intelligent client selection becomes a key systems challenge. This paper proposes FedLECC (Federated Learning with Enhanced Cluster Choice), a lightweight, cluster-aware, and loss-guided client selection strategy for cross-device FL. FedLECC groups clients by label-distribution similarity and prioritizes clusters and clients with higher local loss, enabling the selection of a small yet informative and diverse set of clients. Experimental results under severe label skew show that FedLECC improves test accuracy by up to 12%, while reducing communication rounds by approximately 22% and overall communication overhead by up to 50% compared to strong baselines. These results demonstrate that informed client selection improves the efficiency and scalability of FL workloads in cloud-edge systems.
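The selection idea in the abstract (cluster clients by label-distribution similarity, then prefer high-loss clients while keeping the set diverse across clusters) can be sketched as follows. This is only an illustrative approximation, not FedLECC's actual algorithm: the paper does not specify its clustering method or tie-breaking rules here, so the k-means-style clustering, the fixed iteration count, and the round-robin pick across clusters are all assumptions, as are the function name `select_clients` and its parameters.

```python
import numpy as np
from collections import defaultdict

def select_clients(label_hists, local_losses, n_clusters, n_select, seed=0):
    """Hypothetical cluster-aware, loss-guided client selection sketch.

    label_hists:  (n_clients, n_labels) per-client label counts.
    local_losses: (n_clients,) most recently reported local losses.
    Returns a list of selected client indices.
    """
    rng = np.random.default_rng(seed)
    # Normalize counts to distributions so clustering reflects
    # label-distribution similarity rather than local dataset size.
    dists = label_hists / label_hists.sum(axis=1, keepdims=True)

    # Simple k-means on label distributions (a stand-in for whatever
    # similarity-based grouping FedLECC actually uses).
    centers = dists[rng.choice(len(dists), n_clusters, replace=False)]
    for _ in range(10):
        assign = np.argmin(
            ((dists[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for k in range(n_clusters):
            members = dists[assign == k]
            if len(members):
                centers[k] = members.mean(axis=0)

    # Queue each cluster's clients in descending-loss order, then take one
    # head per cluster round-robin: the picks are informative (high loss)
    # and diverse (they span different label distributions).
    by_cluster = defaultdict(list)
    for i in np.argsort(-local_losses):  # highest loss first
        by_cluster[assign[i]].append(int(i))
    selected, k = [], 0
    while len(selected) < n_select:
        if not any(by_cluster[j] for j in range(n_clusters)):
            break  # fewer clients than requested
        queue = by_cluster[k % n_clusters]
        if queue:
            selected.append(queue.pop(0))
        k += 1
    return selected
```

With six clients split across three label groups, the sketch tends to pick the highest-loss client from each group, which is the informative-yet-diverse behavior the abstract describes.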