The authors introduce SWD-Bench, a new benchmark for evaluating repository-level software documentation by assessing an LLM's ability to understand and implement functionalities based on the documentation alone. The benchmark consists of three interconnected question-answering tasks — Functionality Detection, Functionality Localization, and Functionality Completion — built from data mined from high-quality pull requests. Experiments on SWD-Bench reveal limitations in current documentation generation methods and demonstrate that high-quality documentation can significantly improve issue-solving rates in tools such as SWE-Agent.
Current LLMs struggle to leverage software documentation for repository-level comprehension, but high-quality documentation can improve SWE-Agent's issue-solving rate by 20%.
Software documentation is crucial for repository comprehension. While Large Language Models (LLMs) have advanced documentation generation from code snippets to entire repositories, existing benchmarks have two key limitations: (1) they lack a holistic, repository-level assessment, and (2) they rely on unreliable evaluation strategies, such as LLM-as-a-judge, which suffers from vague criteria and limited repository-level knowledge. To address these issues, we introduce SWD-Bench, a novel benchmark for evaluating repository-level software documentation. Inspired by documentation-driven development, our strategy evaluates documentation quality by assessing an LLM's ability to understand and implement functionalities using the documentation, rather than by directly scoring it. This is measured through function-driven Question Answering (QA) tasks. SWD-Bench comprises three interconnected QA tasks: (1) Functionality Detection, to determine whether a functionality is described; (2) Functionality Localization, to evaluate the accuracy of locating related files; and (3) Functionality Completion, to measure the comprehensiveness of implementation details. We construct the benchmark, containing 4,170 entries, by mining high-quality Pull Requests and enriching them with repository-level context. Experiments reveal limitations in current documentation generation methods and show that source code provides complementary value. Notably, documentation from the best-performing method improves the issue-solving rate of SWE-Agent by 20.00%, demonstrating the practical value of high-quality documentation in supporting documentation-driven development.
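To make the function-driven QA framing concrete, the sketch below shows how one might represent a single benchmark entry and score the first two tasks. The field names (`described_in_docs`, `gold_files`) and the metrics (exact-match for Detection, file-level F1 for Localization) are illustrative assumptions, not the paper's actual schema or evaluation code.

```python
# Hypothetical sketch of a SWD-Bench-style entry and scoring for two
# of the three QA tasks. All names and metrics here are assumptions
# made for illustration; the benchmark's real schema may differ.

from dataclasses import dataclass, field


@dataclass
class BenchEntry:
    """One benchmark entry mined from a pull request."""
    functionality: str                 # natural-language description of the functionality
    described_in_docs: bool            # gold label for Functionality Detection
    gold_files: set = field(default_factory=set)  # gold files for Functionality Localization


def detection_score(pred: bool, entry: BenchEntry) -> float:
    """Functionality Detection: did the model correctly judge whether
    the documentation describes this functionality?"""
    return 1.0 if pred == entry.described_in_docs else 0.0


def localization_f1(pred_files: set, entry: BenchEntry) -> float:
    """Functionality Localization: file-level F1 between predicted and
    gold file sets (an assumed metric)."""
    if not pred_files or not entry.gold_files:
        return 0.0
    tp = len(pred_files & entry.gold_files)
    precision = tp / len(pred_files)
    recall = tp / len(entry.gold_files)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


# Example entry with two gold files; predicting only one of them
# yields precision 1.0, recall 0.5, F1 ≈ 0.67.
entry = BenchEntry(
    functionality="add retry logic to the HTTP client",
    described_in_docs=True,
    gold_files={"client/http.py", "client/retry.py"},
)

print(detection_score(True, entry))                          # 1.0
print(round(localization_f1({"client/http.py"}, entry), 2))  # 0.67
```

Functionality Completion would instead compare a generated implementation against the pull request's reference code, which does not reduce to a one-line metric and is omitted here.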