The paper introduces FedDetox, a federated learning framework that mitigates unintended data poisoning from toxic client data during safety alignment of small language models (SLMs). FedDetox uses knowledge distillation to transfer safety alignment capabilities from large language models (LLMs) to lightweight student classifiers on edge devices, enabling on-device identification and sanitization of unsafe samples. By replacing toxic samples with refusal templates, FedDetox preserves model safety comparable to centralized baselines while maintaining general utility.
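To make the distillation step concrete, here is a minimal sketch (not the paper's code) of how a large safety-aligned teacher could supervise a small student classifier: the teacher provides soft safe/unsafe probabilities and the student learns to match them via KL divergence. The layer sizes, temperature, and the random features standing in for text embeddings are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical teacher (large, safety-aligned) and student (edge-sized)
# binary safe/unsafe classifiers over 768-dim sentence embeddings.
teacher = nn.Sequential(nn.Linear(768, 1024), nn.ReLU(), nn.Linear(1024, 2))
student = nn.Sequential(nn.Linear(768, 64), nn.ReLU(), nn.Linear(64, 2))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens teacher logits so the student sees "dark knowledge"

for step in range(100):
    x = torch.randn(32, 768)  # stand-in for embeddings of client text
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x) / temperature, dim=-1)
    student_log_probs = F.log_softmax(student(x) / temperature, dim=-1)
    # KL(teacher || student), scaled by T^2 as in standard distillation
    loss = F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature**2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Once distilled, the student is small enough to run on-device, which is what makes source-side filtering feasible for resource-constrained clients.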
Turn toxic federated data into a safety win: FedDetox uses on-device sanitization to transform potential data poisoning into positive safety signals for small language models.
As high-quality public data becomes scarce, Federated Learning (FL) offers a vital pathway to leverage valuable private user data while preserving privacy. However, real-world client data often contains toxic or unsafe content, leading to a critical issue we define as unintended data poisoning, which can severely damage the safety alignment of global models during federated alignment. To address this, we propose FedDetox, a robust framework tailored to Small Language Models (SLMs) on resource-constrained edge devices. We first employ knowledge distillation to transfer the sophisticated safety alignment capabilities of large-scale, safety-aligned teacher models into lightweight student classifiers that fit on edge devices. During federated learning for human preference alignment, each edge client then uses its classifier to identify unsafe samples at the source and replace them with refusal templates, effectively transforming potential poisons into positive safety signals. Experiments demonstrate that our approach preserves model safety at a level comparable to centralized baselines without compromising general utility.
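The sanitization step itself could look like the sketch below. The classifier interface, threshold, and refusal wording are assumptions for illustration, not the paper's exact design; the idea is only that each client scores its local preference samples and swaps flagged responses for a refusal template before local training.

```python
from typing import Callable

# Illustrative refusal wording; the paper's actual templates may differ.
REFUSAL_TEMPLATE = "I'm sorry, but I can't help with that request."

def sanitize_client_data(
    samples: list[dict],                   # [{"prompt": ..., "response": ...}, ...]
    unsafe_score: Callable[[str], float],  # distilled student classifier: P(unsafe)
    threshold: float = 0.5,                # assumed decision threshold
) -> list[dict]:
    """Replace responses flagged as unsafe with a refusal template."""
    sanitized = []
    for sample in samples:
        text = sample["prompt"] + " " + sample["response"]
        if unsafe_score(text) >= threshold:
            # Keep the prompt so the model learns to refuse it explicitly,
            # turning the would-be poison into a positive safety signal.
            sanitized.append({"prompt": sample["prompt"],
                              "response": REFUSAL_TEMPLATE})
        else:
            sanitized.append(sample)
    return sanitized
```

Because filtering happens before any gradients are computed, unsafe content never enters the local updates that are aggregated into the global model.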