The paper introduces Hierarchical Semantic-Preserving Detoxification (HSPD), a pipeline that detoxifies raw corpora used for pre-training LLMs by rewriting toxic spans while preserving semantics. HSPD uses Soft Contrastive Decoding (SoCD) to guide an LLM in localizing and rewriting toxic content. Experiments on GPT2-XL, LLaMA2-7B, OPT-6.7B, and Falcon-7B demonstrate state-of-the-art detoxification, substantially reducing Toxicity Probability (TP) and Expected Maximum Toxicity (EMT).
Training LLMs on data detoxified with HSPD cuts toxicity by more than half, outperforming existing methods that intervene only during or after training.
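The abstract does not define TP and EMT; they presumably follow the standard RealToxicityPrompts protocol (Gehman et al., 2020): sample k continuations per prompt, score each with a toxicity classifier such as the Perspective API, take the per-prompt maximum, then average (EMT) or threshold at 0.5 (TP). A minimal sketch under that assumption; the function name, k, and threshold are illustrative:

```python
import numpy as np

def toxicity_metrics(scores_per_prompt, threshold=0.5):
    """Compute Expected Maximum Toxicity (EMT) and Toxicity Probability (TP).

    scores_per_prompt: one array per prompt, holding toxicity scores in
    [0, 1] (e.g. from the Perspective API) for k sampled continuations.
    Assumes the standard RealToxicityPrompts definitions, which the paper
    does not restate in the abstract.
    """
    max_per_prompt = np.array([np.max(s) for s in scores_per_prompt])
    emt = float(max_per_prompt.mean())                 # mean of per-prompt maxima
    tp = float((max_per_prompt >= threshold).mean())   # share of prompts with any toxic continuation
    return emt, tp

# Example: 3 prompts, k=4 continuations each.
emt, tp = toxicity_metrics([np.array([0.1, 0.7, 0.2, 0.3]),
                            np.array([0.05, 0.1, 0.2, 0.1]),
                            np.array([0.6, 0.4, 0.9, 0.2])])
```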
Existing detoxification methods for large language models mainly intervene at the post-training stage or at inference time; few tackle the source of toxicity, namely the dataset itself. Such training-based or controllable-decoding approaches cannot completely suppress a model's inherent toxicity, whereas detoxifying the pre-training dataset can fundamentally reduce the toxicity the model learns during training. We therefore detoxify raw corpora directly with our proposed Hierarchical Semantic-Preserving Detoxification (HSPD) pipeline, which uses Soft Contrastive Decoding (SoCD) to guide an LLM to localize and rewrite toxic spans while preserving semantics, yielding a detoxified corpus that can serve as a drop-in replacement for the original in fine-tuning or other training. On GPT2-XL, HSPD attains state-of-the-art detoxification, reducing Toxicity Probability (TP) from 0.42 to 0.18 and Expected Maximum Toxicity (EMT) from 0.43 to 0.20, and we further validate consistent best-in-class results on LLaMA2-7B, OPT-6.7B, and Falcon-7B. These findings show that semantics-preserving, corpus-level rewriting with HSPD suppresses downstream toxicity while retaining data utility, enabling source-level mitigation that reduces the cost of adjusting model behavior later. (Code is available at: https://github.com/ntsw2001/data_detox_for_llm)
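The abstract does not spell out SoCD's mechanics. As a rough illustration of the contrastive-decoding family it belongs to, the sketch below compares base-model logits with a toxicity-conditioned pass and softly extrapolates away from the toxic direction; the `alpha` weight, the toxic conditioning prompt, and the function name are all assumptions for illustration, not the paper's formulation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

@torch.no_grad()
def soft_contrastive_next_logits(model, tok, prefix, toxic_prefix, alpha=0.5):
    """Next-token logits steered away from a toxicity-conditioned pass.

    Hypothetical sketch: `alpha` softly interpolates between plain decoding
    (alpha=0) and a harder contrastive objective; not the paper's exact SoCD.
    """
    base = model(**tok(prefix, return_tensors="pt")).logits[0, -1]
    toxic = model(**tok(toxic_prefix + prefix, return_tensors="pt")).logits[0, -1]
    # Down-weight tokens the toxicity-conditioned pass prefers over the base pass.
    return base - alpha * (toxic - base)

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
logits = soft_contrastive_next_logits(
    model, tok,
    prefix="The new neighbors seem",
    toxic_prefix="The following text is rude, hostile, and toxic: ",
)
print(tok.decode(int(logits.argmax())))
```

Run token by token, a step like this could drive the span-rewriting stage of a detoxification pipeline; how HSPD actually localizes toxic spans and preserves semantics is described in the paper itself.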