This paper introduces a novel verifiable aggregation scheme for cross-silo federated learning that uses "intrinsic proofs" embedded within model parameters via backdoor injection. By leveraging catastrophic forgetting, these backdoors act as ephemeral verification signals that decay over time, preserving model utility while enabling efficient detection of malicious server behavior. The proposed randomized, single-verifier auditing framework achieves significant speedups (over 1000x on ResNet-18) compared to cryptographic baselines, demonstrating scalability to large models without compromising client anonymity.
Forget ZKPs: this federated learning scheme uses "self-destructing" backdoors to verify aggregation integrity, achieving 1000x speedups over traditional crypto.
While Secure Aggregation (SA) protects update confidentiality in cross-silo federated learning, it fails to guarantee aggregation integrity, allowing malicious servers to silently omit or tamper with updates. Existing verifiable aggregation schemes rely on heavyweight cryptography (e.g., ZKPs, HE), incurring computational costs that scale poorly with model size. In this paper, we propose a lightweight architecture that shifts from extrinsic cryptographic proofs to "intrinsic proofs". We repurpose backdoor injection to embed verification signals directly into model parameters. By harnessing catastrophic forgetting, these signals are robust for immediate verification yet ephemeral, naturally decaying to preserve final model utility. We design a randomized, single-verifier auditing framework compatible with SA, ensuring client anonymity and preventing signal collision without trusted third parties. Experiments on SVHN, CIFAR-10, and CIFAR-100 demonstrate high detection probabilities against malicious servers. Notably, our approach achieves over 1000x speedup on ResNet-18 compared to cryptographic baselines, effectively scaling to large models.
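To make the auditing intuition concrete, here is a minimal NumPy sketch of the core check, not the paper's actual construction: a linear projection stands in for a backdoor trigger, and the function `verify` and all thresholds are illustrative assumptions. The idea is that an auditing client plants a secret signal in its own update; if the server honestly includes that update in the aggregate, the signal is detectable by projection, and if the server silently drops the update, the projection collapses to near zero.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64          # model dimension (toy)
n_clients = 5

# Secret "trigger" direction known only to the auditing client --
# a stand-in for the backdoor pattern the paper embeds in parameters.
trigger = rng.standard_normal(d)
trigger /= np.linalg.norm(trigger)

# Honest updates are small random vectors; client 0 additionally
# embeds its intrinsic proof along the trigger direction.
updates = [rng.standard_normal(d) * 0.1 for _ in range(n_clients)]
updates[0] = updates[0] + 3.0 * trigger

honest_agg = np.mean(updates, axis=0)          # server aggregates everyone
malicious_agg = np.mean(updates[1:], axis=0)   # server silently omits client 0

def verify(agg, trigger, threshold=0.3):
    # Projection onto the secret direction is ~0.6 when the planted
    # update was included, and near 0 when it was dropped.
    return float(agg @ trigger) > threshold

print(verify(honest_agg, trigger))     # True: update was aggregated
print(verify(malicious_agg, trigger))  # False: omission detected
```

Verification here is a single dot product, which is why this style of check scales to large models where per-round cryptographic proofs become prohibitive.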