This paper benchmarks AI coding agent logging practices against human baselines across 4,550 pull requests in open-source repositories. The study reveals that agents change logging less frequently than humans but exhibit higher log density when they do, and that explicit logging instructions are largely ineffective. The finding that humans perform the vast majority of post-generation log repairs highlights a critical gap in current AI coding agent capabilities.
AI coding agents are surprisingly bad at logging: humans silently perform 72.5% of the post-generation fixes to their logging mistakes.
Software logging is essential for maintaining and debugging complex systems, yet it remains unclear how AI coding agents handle this non-functional requirement. While prior work characterizes human logging practices, the behaviors of AI coding agents and the efficacy of natural language instructions in governing them are unexplored. To address this gap, we conduct an empirical study of 4,550 agentic pull requests across 81 open-source repositories. We compare agent logging patterns against human baselines and analyze the impact of explicit logging instructions. We find that agents change logging less often than humans in 58.4% of repositories, though they exhibit higher log density when they do. Furthermore, explicit logging instructions are rare (4.7%) and ineffective, as agents fail to comply with constructive requests 67% of the time. Finally, we observe that humans perform 72.5% of post-generation log repairs, acting as "silent janitors" who fix logging and observability issues without explicit review feedback. These findings indicate a dual failure in natural language instruction (i.e., scarcity of logging instructions and low agent compliance), suggesting that deterministic guardrails might be necessary to ensure consistent logging practices.