This paper introduces FedSpy-LLM, a novel data reconstruction attack against federated LLMs trained with PEFT that addresses prior attacks' limitations in batch size, sequence length, and generalizability across model architectures. FedSpy-LLM uses a gradient decomposition strategy that exploits rank deficiency and subspace structure to extract tokens efficiently while preserving key signal components, despite the reconstruction challenges posed by PEFT's substantial null space. Experiments demonstrate its effectiveness across encoder, decoder, and encoder-decoder architectures, as well as its ability to reconstruct longer sequences and larger batches than existing attacks.
Turns out, federated learning with PEFT doesn't protect your LLM training data as well as you thought: FedSpy-LLM can reconstruct surprisingly long sequences from shared gradients, even across different model architectures.
Given the growing reliance on private data for training Large Language Models (LLMs), Federated Learning (FL) combined with Parameter-Efficient Fine-Tuning (PEFT) has garnered significant attention for enhancing both privacy and efficiency. Despite FL's privacy benefits, prior studies have shown that private data can still be extracted from shared gradients. However, these studies, which mainly target full-parameter training, are limited to small batches, short input sequences, and specific model architectures, such as encoder-based or decoder-based models, and reconstruction quality degrades further when the gradients come from PEFT methods. To fully characterize the practical attack surface of federated LLMs, this paper proposes FedSpy-LLM, a scalable and generalizable data reconstruction attack that recovers training data at larger batch sizes and longer sequence lengths while generalizing across diverse model architectures, even when PEFT methods are deployed for training. At the core of FedSpy-LLM is a novel gradient decomposition strategy that exploits the rank deficiency and subspace structure of gradients, enabling efficient token extraction while preserving key signal components at scale. This strategy also mitigates the reconstruction challenges introduced by PEFT's substantial null space, ensuring robustness across encoder-based, decoder-based, and encoder-decoder architectures. Finally, by iteratively aligning each token's partial-sequence gradient with the full-sequence gradient, FedSpy-LLM recovers accurate token ordering in the reconstructed sequences.
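To make the two core ideas concrete, here is a minimal, self-contained NumPy sketch, not the paper's actual method. It assumes a toy embedding layer with orthonormal per-token gradient contributions and a stand-in `toy_grad` function in place of a real backward pass; all names (`per_token_grad`, `toy_grad`, `leaked`) are illustrative inventions. It shows (1) how the rank deficiency of an embedding gradient leaks the set of tokens in a batch, and (2) how greedily aligning partial-sequence gradients with the full-sequence gradient can recover token order.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, dim = 100, 16

# Toy batch: token 42 appears twice, so the sequence contains repeats.
true_tokens = [7, 42, 42, 13]
distinct = sorted(set(true_tokens))

# Orthonormal stand-ins for each token's gradient contribution
# (a simplifying assumption that keeps the demo deterministic).
Q, _ = np.linalg.qr(rng.normal(size=(dim, len(distinct))))
per_token_grad = {t: Q[:, i] for i, t in enumerate(distinct)}

# --- Token extraction via rank deficiency ---------------------------
# The embedding gradient dL/dE is a sum of outer products
# one_hot(t) x g_t over tokens t in the batch, so only rows of tokens
# that actually occur are nonzero, and rank(dL/dE) is bounded by the
# number of distinct tokens.
embedding_grad = np.zeros((vocab, dim))
for t in true_tokens:
    embedding_grad[t] += per_token_grad[t]

leaked = np.flatnonzero(np.linalg.norm(embedding_grad, axis=1) > 1e-8)
S = np.linalg.svd(embedding_grad, compute_uv=False)
rank = int(np.sum(S > 1e-8 * S[0]))
print("leaked token set:", leaked.tolist())  # [7, 13, 42]
print("gradient rank:", rank)                # 3 == len(distinct)

# --- Ordering by partial/full gradient alignment --------------------
def toy_grad(seq):
    """Stand-in for a real backward pass: position-weighted sum."""
    g = np.zeros(dim)
    for pos, t in enumerate(seq):
        g += per_token_grad[t] / (pos + 1)
    return g

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

full_grad = toy_grad(true_tokens)
sequence = []
for _ in range(len(true_tokens)):
    # Keep the candidate whose partial-sequence gradient best aligns
    # with the observed full-sequence gradient.
    best = max(leaked, key=lambda t: cosine(toy_grad(sequence + [t]),
                                            full_grad))
    sequence.append(int(best))
print("recovered order:", sequence)          # [7, 42, 42, 13]
```

Under these toy assumptions the greedy search recovers the order exactly; the real attack additionally has to cope with noisy, non-orthogonal gradients, many sequences per batch, and the fact that a PEFT setup exposes only low-rank adapter gradients, whose substantial null space is precisely what the paper's decomposition strategy is designed to work around.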