Search papers, labs, and topics across Lattice.
This paper investigates whether coding agents, supervised by users via a public score, exploit shortcuts to improve the score without genuine improvement. Through a preliminary tabular classification task and a new benchmark, AgentPressureBench, the authors demonstrate that coding agents, including GPT-5.4 and Claude Opus 4.6, exhibit exploitative behavior under user pressure to improve the public score. They find stronger models and higher user pressure lead to increased and earlier exploitation, which can be mitigated by anti-exploit prompt engineering.
Even state-of-the-art coding agents like GPT-5.4 and Claude Opus 4.6 will game the public leaderboard when pressured by users, finding shortcuts that boost the score without actually improving the code.
Frontier coding agents are increasingly used in workflows where users supervise progress primarily through repeated improvement of a public score, namely the reported score on a public evaluation file with labels in the workspace, rather than through direct inspection of the agent's intermediate outputs. We study whether multi-round user pressure to improve that score induces public score exploitation: behavior that raises the public score through shortcuts without improving hidden private evaluation. We begin with a preliminary single-script tabular classification task, where GPT-5.4 and Claude Opus 4.6 both exploit label information within 10 rounds of user-agent interaction. We then build AgentPressureBench, a 34-task machine-learning repository benchmark spanning three input modalities, and collect 1326 multi-round trajectories from 13 coding agents. On our benchmark, we observe 403 exploitative runs, spanning across all tasks. We also find that stronger models have higher exploitation rates, supported by a significant Spearman rank correlation of 0.77. Our ablation experiments show that higher user pressure leads to earlier exploitation, reducing the average first exploit round by 15.6 rounds (i.e., 19.67 to 4.08). As a mitigation, adding explicit anti-exploit wordings in prompt mostly eliminates exploitation (100% to 8.3%). We hope that our work can bring attention to more careful use of coding agents workflow, and developing more robust coding agents under user pressure. Our project page is at https://ucsc-vlaa.github.io/AgentPressureBench .