The paper introduces $k$NNProxy, a training-free method for aligning proxy LLMs with source LLMs in zero-shot detection of LLM-generated text (LGT). It leverages a $k$NN-LM retrieval mechanism to adapt the proxy LLM's output, interpolating it with a token-level predictive distribution derived from a lightweight datastore built from target-reflective LGT. Experiments show strong detection performance, even under domain shift, using a mixture of proxies (MoP) that routes each input to a domain-specific datastore.
Forget fine-tuning and repeated API calls: $k$NNProxy aligns proxy LLMs for LLM-generated text detection with a training-free $k$NN retrieval mechanism, achieving robust performance even when the source LLM is a black box.
LLM-generated text (LGT) detection is essential for reliable forensic analysis and for mitigating LLM misuse. Existing LGT detectors generally fall into two broad classes: learning-based approaches and zero-shot methods. Compared with learning-based detectors, zero-shot methods are particularly promising because they eliminate the need to train task-specific classifiers. However, their reliability fundamentally rests on the assumption that an off-the-shelf proxy LLM is well aligned with the often unknown source LLM, a premise that rarely holds in real-world black-box scenarios. To address this discrepancy, existing proxy alignment methods typically rely on supervised fine-tuning of the proxy or repeated interactions with commercial APIs, thereby increasing deployment costs, exposing detectors to silent API changes, and limiting robustness under domain shift. Motivated by these limitations, we propose the $k$-nearest neighbor proxy ($k$NNProxy), a training-free and query-efficient proxy alignment framework that repurposes the $k$NN language model ($k$NN-LM) retrieval mechanism as a domain adapter for a fixed proxy LLM. Specifically, a lightweight datastore is constructed once from a target-reflective LGT corpus, either via fixed-budget querying or from existing datasets. During inference, nearest-neighbor evidence induces a token-level predictive distribution that is interpolated with the proxy's output, yielding an aligned prediction without proxy fine-tuning or access to per-token API outputs. To improve robustness under domain shift, we extend $k$NNProxy into a mixture of proxies (MoP) that routes each input to a domain-specific datastore for domain-consistent retrieval. Extensive experiments demonstrate the strong detection performance of our method.
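The retrieval-and-interpolation step follows the general $k$NN-LM recipe: find the $k$ datastore entries nearest to the current hidden state, turn their stored next tokens into a distribution, and mix it with the proxy's prediction. A minimal sketch is below; all function names, the softmax-over-negative-distance weighting, and the interpolation weight are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def knn_distribution(query_hidden, datastore_keys, datastore_tokens,
                     vocab_size, k=8, temperature=1.0):
    """Induce a token-level distribution from the k nearest datastore entries.

    datastore_keys:   (N, d) hidden states from the target-reflective LGT corpus
    datastore_tokens: (N,)   next-token id stored alongside each key
    """
    # L2 distance from the query context to every stored key
    dists = np.linalg.norm(datastore_keys - query_hidden, axis=1)
    nn = np.argsort(dists)[:k]                      # indices of k nearest keys
    weights = np.exp(-dists[nn] / temperature)      # closer neighbors weigh more
    weights /= weights.sum()
    p = np.zeros(vocab_size)
    for w, tok in zip(weights, datastore_tokens[nn]):
        p[tok] += w                                 # aggregate mass per next token
    return p

def aligned_next_token_probs(p_proxy, p_knn, lam=0.25):
    """Interpolate the fixed proxy's distribution with retrieval evidence."""
    return lam * p_knn + (1.0 - lam) * p_proxy
```

Under the MoP extension described above, the only change at inference time is which `datastore_keys`/`datastore_tokens` pair is passed in: a router first picks the domain-specific datastore for the input, and the interpolation itself is unchanged.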