This paper adapts the mdok approach, originally designed for detecting machine-generated text, to identifying machine-generated code in the SemEval-2026 Task 13 competition. The authors fine-tuned various LLMs on the code detection subtasks: binary detection, generator LLM family attribution, and identification of hybrid or adversarially modified code. While the submitted systems achieved competitive performance across all subtasks, a significant gap remains to the top-performing systems.
Adapting machine-generated text detection methods to code proves competitive, but current LLMs still struggle to reliably identify AI-generated code, especially when obfuscated.
Multi-domain detection of machine-generated code snippets across various programming languages is a challenging task. SemEval-2026 Task 13 approaches this challenge from several angles: as a binary detection problem and as attribution of the source. Its subtasks also cover detection of the generator LLM family, of hybrid code co-written by humans and machines, and of adversarially modified code that hides its origin. Our submitted systems adapted the existing mdok approach (focused on machine-generated text detection) to these problems by exploring various base models better suited to code understanding. The results indicate that the submitted systems are competitive in all three subtasks; however, the margins to the top-performing systems are significant, leaving room for further improvement.