This paper introduces a federated learning approach to training time series foundation models (TSFMs) that addresses heterogeneity at both the inter-domain and intra-domain levels. The method mitigates intra-domain conflicts by using local regularization to enforce domain-invariant representations, and addresses inter-domain discrepancies through domain-aware aggregation. Experiments demonstrate that TSFMs trained with this method outperform centralized and federated baselines in forecasting and achieve competitive zero-shot performance.
Domain-aware federated learning unlocks high-quality time series foundation models, even when training on highly heterogeneous datasets.
Heterogeneity in time series data is more pronounced than in vision or language, as temporal dynamics vary substantially across domains and tasks. Existing efforts to train time series foundation models (TSFMs) from scratch often rely on mixed-batch strategies that merge large-scale datasets, which can cause gradient conflicts and degrade representation quality. To address this, we propose a fine-grained learning method that distills invariant knowledge from heterogeneous series while reducing cross-domain interference. We characterize heterogeneity at two levels: inter-domain and intra-domain. To tackle this bi-level heterogeneity, we design a federated learning method that mitigates intra-domain conflicts by enforcing domain-invariant and semantically consistent representations through local regularization, and addresses inter-domain discrepancies by enhancing cross-domain collaboration via domain-aware aggregation. Experiments across diverse benchmarks show that TSFMs trained with our method consistently outperform both centralized and federated TSFM baselines in point and probabilistic forecasting, while also achieving competitive zero-shot performance at scale. This offers a flexible pathway for training TSFMs from scratch in heterogeneous environments.
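The abstract names two mechanisms without giving their exact form: a local regularizer that keeps client representations domain-invariant, and a server-side, domain-aware aggregation rule. The sketch below is a minimal PyTorch reading of those two ideas, not the authors' algorithm: the `TinyForecaster` model, the fixed anchor used by the invariance penalty, and the cosine-similarity weighting in `domain_aware_aggregate` are all illustrative assumptions.

```python
# Minimal federated-training sketch (assumptions: toy model, anchor-based
# invariance penalty, cosine-similarity-weighted aggregation).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyForecaster(nn.Module):
    """Toy stand-in for a TSFM: an encoder producing representations plus a forecast head."""

    def __init__(self, d_in=1, d_hid=32, horizon=8):
        super().__init__()
        self.encoder = nn.GRU(d_in, d_hid, batch_first=True)
        self.head = nn.Linear(d_hid, horizon)

    def forward(self, x):
        _, h = self.encoder(x)   # h: (1, batch, d_hid)
        z = h.squeeze(0)         # per-series representation
        return self.head(z), z


def local_update(global_model, loader, anchor, lam=0.1, lr=1e-3, epochs=1):
    """Client step: forecasting loss plus a regularizer pulling representations toward a shared anchor."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            pred, z = model(x)
            loss = F.mse_loss(pred, y)                          # local forecasting objective
            loss = loss + lam * F.mse_loss(z.mean(0), anchor)   # assumed invariance regularizer
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model.state_dict()


def domain_aware_aggregate(global_state, client_states):
    """Server step: weight each client's update by its alignment with the mean update direction."""
    flatten = lambda s: torch.cat([(s[k] - global_state[k]).flatten() for k in s])
    deltas = [flatten(s) for s in client_states]
    mean_delta = torch.stack(deltas).mean(dim=0)
    sims = torch.stack([F.cosine_similarity(d, mean_delta, dim=0).clamp(min=0.0) for d in deltas])
    weights = sims / sims.sum() if sims.sum() > 0 else torch.full_like(sims, 1.0 / len(sims))
    return {k: sum(w * s[k] for w, s in zip(weights, client_states)) for k in global_state}


# Illustrative rounds over two synthetic "domains" with different temporal dynamics.
def make_loader(freq, n=32, ctx=64, horizon=8):
    t = torch.linspace(0, 8 * torch.pi, ctx + horizon)
    series = torch.sin(freq * t) + 0.1 * torch.randn(n, ctx + horizon)
    return [(series[:, :ctx].unsqueeze(-1), series[:, ctx:])]


torch.manual_seed(0)
global_model = TinyForecaster()
anchor = torch.zeros(32)  # assumed fixed anchor; the paper's invariance target would come from the method itself
for _ in range(3):
    states = [local_update(global_model, make_loader(f), anchor) for f in (1.0, 5.0)]
    global_model.load_state_dict(domain_aware_aggregate(global_model.state_dict(), states))
```

In this reading, clients whose updates align with the consensus direction receive more aggregation weight, which is one simple way to realize "domain-aware" weighting; the paper's actual criterion and regularizer may differ.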