This paper introduces Redundant Chain-of-Thought (R-CoT), a watermarking framework that embeds watermarks into the reasoning path of LLMs rather than modifying the output distribution directly. R-CoT uses a dual-trajectory optimization mechanism based on GRPO that allows the native and watermarked reasoning paths to coexist within the model. Experiments show that R-CoT maintains a true positive rate above 95% even after fine-tuning, demonstrating stronger robustness than existing watermarking methods.
Watermarking LLMs by embedding the signal into the reasoning process itself proves surprisingly robust against fine-tuning and other post-training modifications.
Large language models (LLMs) are widely deployed across many scenarios owing to their reasoning capabilities. To prevent misuse and establish ownership, watermarking is commonly employed. However, most existing watermarking methods rely on superficial modifications to the model's output distribution, leaving the watermark vulnerable to perturbation and removal. To overcome this challenge, this paper introduces a reasoning-layer framework termed Redundant Chain-of-Thought (R-CoT), which embeds watermarks into the reasoning path. A dual-trajectory optimization mechanism based on GRPO enables the native and watermarked reasoning paths to coexist within a shared parameter space, internalizing the watermark as a distinct reasoning policy. As a result, the watermark resides in the model's stable reasoning path, avoiding the watermark failures caused by output-level perturbations. Experimental results show that R-CoT achieves higher watermark effectiveness and stronger robustness than existing methods: under fine-tuning and other post-training operations, the true positive rate (TPR) consistently remains above 95%, exhibiting only marginal degradation.
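To make the dual-trajectory idea more concrete, below is a minimal sketch of how a GRPO-style update might reward two reasoning modes within one group of sampled trajectories. The reward shaping, the watermark trigger, and the helper names (`trajectory_reward`, `contains_watermark_pattern`) are illustrative assumptions for this sketch, not the paper's actual design.

```python
import random
import statistics

def group_relative_advantages(rewards):
    """GRPO-style advantage: normalize each reward against its group's mean and std."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero std in constant groups
    return [(r - mean) / std for r in rewards]

def trajectory_reward(answer_correct, contains_watermark_pattern, watermark_requested):
    """Hypothetical reward: correctness always counts; the watermark pattern is
    rewarded only when the watermark trigger is active, and penalized otherwise,
    so both the native and watermarked policies can be reinforced side by side."""
    reward = 1.0 if answer_correct else 0.0
    if watermark_requested:
        reward += 0.5 if contains_watermark_pattern else -0.5
    elif contains_watermark_pattern:
        reward -= 0.5
    return reward

# Sample a group of reasoning trajectories for the same prompt, half with the
# watermark trigger active, half without (the "dual trajectories").
group = []
for i in range(8):
    watermark_requested = i % 2 == 0
    group.append(trajectory_reward(
        answer_correct=random.random() > 0.3,
        contains_watermark_pattern=watermark_requested and random.random() > 0.2,
        watermark_requested=watermark_requested,
    ))

print(group_relative_advantages(group))
```

In this reading, the group-relative normalization is what lets a single policy learn both behaviors: trajectories are only compared against others sampled for the same prompt, so the watermarked reasoning style is reinforced when triggered without suppressing the native style elsewhere.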