This paper introduces a new paradigm for scaling LLM coding agents by focusing on mastering five fundamental atomic skills: code localization, code editing, unit-test generation, issue reproduction, and code review. The authors train agents using joint reinforcement learning over these atomic skills, yielding consistent improvements without negative interference. Experiments show that gains in these atomic skills generalize well to unseen composite coding tasks, producing an average performance increase of 18.7% across atomic and composite tasks.
Forget task-specific overfitting: training coding agents on atomic skills unlocks surprisingly broad generalization to complex software engineering tasks.
Current LLM coding agents are predominantly trained on composite benchmarks (e.g., bug fixing), which often leads to task-specific overfitting and limited generalization. To address this, we propose a novel scaling paradigm that shifts the focus from task-level optimization to atomic skill mastery. We first formalize five fundamental atomic skills (code localization, code editing, unit-test generation, issue reproduction, and code review) that serve as the basis vectors of complex software engineering tasks. Compared with composite coding tasks, these atomic skills are more generalizable and composable. We then scale coding agents by performing joint RL over the atomic skills, so that every skill improves consistently without negative interference or trade-offs among them. Notably, improvements in these atomic skills generalize well to unseen composite coding tasks such as bug fixing, code refactoring, machine learning engineering, and code security. This observation motivates a new scaling paradigm for coding agents: training with atomic skills. Extensive experiments demonstrate the effectiveness of the proposed paradigm, with joint RL improving average performance by 18.7% across the 5 atomic skills and 5 composite tasks.
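The abstract's core mechanism, joint RL over a mixed pool of atomic-skill tasks rather than per-task fine-tuning, can be illustrated with a minimal toy sketch. Everything below is a hypothetical stand-in (the uniform task mixing, the `joint_rl_step` update, and the scalar "skill scores" are illustrative assumptions, not the paper's actual training setup or reward design):

```python
import random

# The five atomic skills named in the paper.
ATOMIC_SKILLS = [
    "code_localization",
    "code_editing",
    "unit_test_generation",
    "issue_reproduction",
    "code_review",
]

def sample_batch(batch_size, rng):
    """Mix tasks from all skills uniformly into one RL batch,
    instead of training on a single composite benchmark."""
    return [rng.choice(ATOMIC_SKILLS) for _ in range(batch_size)]

def joint_rl_step(scores, batch, lr=0.1):
    """Toy stand-in for a policy-gradient step: nudge each sampled
    skill's scalar score toward 1.0. One shared update serves all
    skills, so they improve jointly rather than in isolation."""
    for skill in batch:
        scores[skill] += lr * (1.0 - scores[skill])
    return scores

rng = random.Random(0)
scores = {s: 0.0 for s in ATOMIC_SKILLS}
for _ in range(50):
    scores = joint_rl_step(scores, sample_batch(8, rng))

print({s: round(scores[s], 2) for s in ATOMIC_SKILLS})
```

Because every batch draws from all five skills, each skill receives updates throughout training; in the real setting, the paper's claim is that this joint optimization avoids the negative interference seen when composite tasks are optimized one at a time.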