This paper presents a large-scale behavioral analysis of real-world conversational programming in IDE-native settings, using 74,998 developer messages from 11,579 chat sessions across Cursor and GitHub Copilot. The study identifies three key shifts in programming work: progressive specification, redistribution of cognitive work to AI, and active management of the AI collaboration. The findings provide empirical insights into AI-assisted development and inform the design of future programming environments.
AI-assisted IDEs are fundamentally changing how developers work, shifting from upfront task specification to iterative refinement, and from direct code engagement to delegating diagnosis and validation to the AI.
IDE-integrated AI coding assistants, which operate conversationally within developers' working codebases with access to project context and multi-file editing, are rapidly reshaping software development. However, empirical investigation of this shift remains limited: existing studies largely rely on small-scale, controlled settings or analyze general-purpose chatbots rather than codebase-aware IDE workflows. We present, to the best of our knowledge, the first large-scale study of real-world conversational programming in IDE-native settings, analyzing 74,998 developer messages from 11,579 chat sessions across 1,300 repositories and 899 developers using Cursor and GitHub Copilot. These chats were committed to public repositories as part of routine development, capturing in-the-wild behavior. Our findings reveal three shifts in how programming work is organized: conversational programming operates as progressive specification, with developers iteratively refining outputs rather than specifying complete tasks upfront; developers redistribute cognitive work to AI, delegating diagnosis, comprehension, and validation rather than engaging with code and outputs directly; and developers actively manage the collaboration, externalizing plans into persistent artifacts and negotiating AI autonomy through context injection and behavioral constraints. These results provide foundational empirical insights into AI-assisted development and offer implications for the design of future programming environments.