The paper investigates how likelihood-based decoding strategies in language models create a "truncation blind spot" by excluding statistically rare but contextually appropriate tokens that humans choose. Analyzing 1.8 million texts, the authors find that 8-18% of human-selected tokens fall outside the truncation boundaries of common decoding methods such as top-k and nucleus sampling. Simple classifiers trained on predictability and lexical diversity reliably detect machine-generated text, and truncation parameters, rather than model scale or architecture, are the primary driver of detectability.
Language model text is detectable because it misses the "long tail" of human word choice, not because it's less intelligent.
Standard decoding strategies for text generation, including top-k, nucleus sampling, and contrastive search, select tokens based on likelihood, restricting selection to high-probability regions. Human language production operates differently: tokens are chosen for communicative appropriateness rather than statistical frequency. This mismatch creates a truncation blind spot: contextually appropriate but statistically rare tokens remain accessible to humans yet unreachable by likelihood-based decoding. We hypothesize that this contributes to the detectability of machine-generated text. Analyzing over 1.8 million texts across eight language models, five decoding strategies, and 53 hyperparameter configurations, we find that 8-18% of human-selected tokens fall outside typical truncation boundaries. Simple classifiers trained on predictability and lexical diversity achieve high detection accuracy. Crucially, neither model scale nor architecture correlates strongly with detectability; truncation parameters account for most of the variance. Configurations achieving low detectability often produce incoherent text, indicating that evading detection and producing natural text are distinct objectives. These findings suggest that detectability is driven by likelihood-based token selection, not merely by model capability.
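The truncation boundaries discussed above can be made concrete with a minimal sketch: top-k keeps the k most probable tokens, while nucleus (top-p) sampling keeps the smallest set of tokens whose cumulative probability reaches p. Any human-chosen token outside these sets is, by construction, unreachable by the sampler. The toy distribution, parameter values, and the rank of the "human" token below are illustrative assumptions, not figures from the paper.

```python
import numpy as np

def nucleus_set(probs, p=0.9):
    """Smallest set of token indices whose cumulative probability >= p
    (the candidate set used by nucleus / top-p sampling)."""
    order = np.argsort(probs)[::-1]        # token indices, most probable first
    cum = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum, p) + 1   # shortest prefix with mass >= p
    return set(order[:cutoff].tolist())

def top_k_set(probs, k=50):
    """Indices of the k most probable tokens (top-k truncation)."""
    return set(np.argsort(probs)[::-1][:k].tolist())

# Toy next-token distribution over a 1000-token vocabulary:
# a few likely tokens plus a long tail of rare ones.
rng = np.random.default_rng(0)
logits = rng.normal(size=1000)
probs = np.exp(logits) / np.exp(logits).sum()

# A hypothetical "human" choice: the 201st most probable token --
# contextually plausible, but deep in the tail for the sampler.
human_token = int(np.argsort(probs)[::-1][200])
reachable_top_k = human_token in top_k_set(probs, k=50)
reachable_nucleus = human_token in nucleus_set(probs, p=0.9)
```

Here the token is excluded from top-k (k=50) outright; whether the nucleus reaches it depends on how flat the distribution is, which is exactly why the paper finds detectability tracks the truncation parameters themselves.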