The paper introduces pFedGM, a personalized federated learning method that uses Gaussian generative modeling to capture client heterogeneity in representation distributions. It trains a Gaussian generator via weighted re-sampling and employs a dual objective to balance global collaboration (maximizing inter-class distance) and local personalization (minimizing intra-class distance). pFedGM uses a dual-scale fusion framework inspired by the Kalman gain to create personalized classifier heads by modeling the global representation distribution as a prior and client-specific data as the likelihood, achieving state-of-the-art or competitive performance across various heterogeneous scenarios.
Personalized federated learning gets a Bayesian upgrade: pFedGM uses Gaussian generative modeling to capture client heterogeneity and outperforms existing methods in diverse scenarios.
Federated learning has emerged as a paradigm for collaboratively training models on inherently distributed client data while safeguarding privacy. In this context, personalized federated learning tackles data heterogeneity by equipping each client with a dedicated model. A prevalent strategy decouples the model into a shared feature extractor and a personalized classifier head, where the latter actively guides representation learning. However, prior work has focused on classifier-head-guided personalization, neglecting the personalized characteristics latent in the representation distribution itself. Motivated by this gap, we propose pFedGM, a method based on Gaussian generative modeling. The approach first trains a Gaussian generator that models client heterogeneity via weighted re-sampling. It then strikes a balance between global collaboration and personalization through a dual objective: a shared objective that maximizes inter-class distance across clients, and a local objective that minimizes intra-class distance within each client. To achieve this, we decouple the conventional Gaussian classifier into a navigator for global optimization and a statistic extractor for capturing distributional statistics. Inspired by the Kalman gain, the algorithm then applies a dual-scale fusion framework at the global and local levels to equip each client with a personalized classifier head: the global representation distribution is modeled as a prior and the client-specific data as the likelihood, enabling Bayesian inference for class-probability estimation. Our evaluation spans a comprehensive range of scenarios, including heterogeneity in class counts, environmental corruption, and multiple benchmark datasets and configurations. Across these settings, pFedGM achieves superior or competitive performance compared to state-of-the-art methods.
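The Kalman-gain-inspired fusion described above can be sketched numerically. The snippet below is a minimal illustration, not the paper's exact formulation: it assumes diagonal Gaussians, treats the global per-class statistics as the prior and the client's local sample statistics as the likelihood, and weighs them by a Kalman-style gain to produce a fused (personalized) class mean; classification then scores a representation under each class's Gaussian. All function names and shapes are hypothetical.

```python
import numpy as np

def kalman_style_fusion(mu_global, var_global, mu_local, var_local, n_local):
    """Fuse global prior and local likelihood statistics for one class.

    Hypothetical sketch: mu_global/var_global are the prior (global
    representation distribution); mu_local/var_local are statistics of the
    n_local client samples. The gain weighs local evidence by its precision
    relative to the prior, as in a scalar Kalman update.
    """
    gain = var_global / (var_global + var_local / n_local)
    mu_fused = mu_global + gain * (mu_local - mu_global)
    var_fused = (1.0 - gain) * var_global
    return mu_fused, var_fused

def gaussian_class_scores(z, class_means, var):
    """Per-class log-likelihood (up to a shared constant) of representation z
    under diagonal Gaussians with the fused class means."""
    return np.array(
        [-0.5 * np.sum((z - mu) ** 2 / var + np.log(var)) for mu in class_means]
    )

# Usage: with many local samples, the fused mean leans toward the local mean.
mu_f, var_f = kalman_style_fusion(
    np.array([0.0]), np.array([1.0]),   # global prior: mean 0, variance 1
    np.array([2.0]), np.array([1.0]),   # local stats: mean 2, variance 1
    n_local=100,
)
# mu_f is close to 2.0; var_f shrinks below the prior variance.
```

The gain goes to 1 as the local sample size grows (personalization dominates) and to 0 when local evidence is scarce (the global prior dominates), which mirrors the paper's global/local balancing intuition.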