This paper investigates the effectiveness of fine-tuning language models for detecting AI-generated text, addressing the growing need for authenticity verification. The authors created a large-scale corpus of human and AI-generated text and introduced two novel fine-tuning strategies: Per LLM and Per LLM family fine-tuning. Their best fine-tuned detector achieved up to 99.6% token-level accuracy on a benchmark of 21 LLMs, significantly surpassing existing open-source baselines.
Fine-tuning AI-generated text detectors on a per-LLM or per-LLM-family basis pushes detection accuracy to nearly 100%, far beyond generic open-source detectors.
The rapid progress of large language models has enabled the generation of text that closely resembles human writing, creating challenges for authenticity verification in education, publishing, and digital security. Detecting AI-generated text has therefore become a crucial technical and ethical issue. This paper presents a comprehensive study of AI-generated text detection based on large-scale corpora and novel training strategies. We introduce a 1-billion-token corpus of human-authored texts spanning multiple genres and a 1.9-billion-token corpus of AI-generated texts produced by prompting a variety of LLMs across diverse domains. Using these resources, we develop and evaluate numerous detection models and propose two novel training paradigms: Per LLM and Per LLM family fine-tuning. Across a 100-million-token benchmark covering 21 large language models, our best fine-tuned detector achieves up to 99.6% token-level accuracy, substantially outperforming existing open-source baselines.
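The abstract reports detector quality as token-level accuracy. As a rough illustration (the function and label conventions below are hypothetical, not taken from the paper), a detector that assigns each token a binary label (0 = human-written, 1 = AI-generated) can be scored as the fraction of tokens labeled correctly:

```python
# Minimal sketch of token-level accuracy scoring for an
# AI-generated-text detector. Label convention is an assumption:
# 0 = human-written token, 1 = AI-generated token.

def token_level_accuracy(predicted, gold):
    """Return the fraction of tokens whose predicted label matches gold."""
    if len(predicted) != len(gold):
        raise ValueError("prediction/gold length mismatch")
    correct = sum(p == g for p, g in zip(predicted, gold))
    return correct / len(gold)

# Example: 7 of 8 tokens labeled correctly -> accuracy 0.875
gold = [0, 0, 1, 1, 1, 0, 1, 1]
pred = [0, 0, 1, 1, 0, 0, 1, 1]
print(token_level_accuracy(pred, gold))  # 0.875
```

A token-level metric like this is stricter than document-level accuracy, since a detector must localize the human/AI boundary within mixed texts rather than just classify whole documents.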