The paper introduces Orchid, a benchmark for function-level code generation designed to assess the impact of requirement ambiguity on LLMs. Using Orchid's 1,304 tasks covering lexical, syntactic, semantic, and vagueness ambiguities, the authors measure how much various LLMs' performance degrades when generating code from ambiguous requirements. Results show that ambiguity significantly reduces performance across all models, even advanced ones, and that LLMs struggle to identify or resolve such ambiguity.
LLMs' impressive code generation skills crumble when faced with the messy reality of ambiguous requirements, highlighting a critical gap in their ability to handle real-world software development scenarios.
Software requirement ambiguity is ubiquitous in real-world development, stemming from the inherent imprecision of natural language and the varying interpretations of stakeholders. While Large Language Models (LLMs) have demonstrated impressive capabilities in generating code from precise specifications, such ambiguity poses a significant obstacle to reliable automated code generation. Existing benchmarks typically assume clear and unambiguous requirements, leaving an empirical gap in understanding how LLMs behave when faced with the inherent uncertainty of real-world software requirements. In this paper, we introduce Orchid, the first code generation benchmark specifically designed with ambiguous requirements. It comprises 1,304 function-level tasks covering four distinct types of ambiguity: lexical, syntactic, semantic, and vagueness. Leveraging this dataset, we conduct the first systematic empirical study to evaluate the impact of requirement ambiguity on LLM-based code generation. Our results demonstrate that ambiguity consistently degrades the performance of all evaluated LLMs, with the most pronounced negative effects observed in highly advanced models. Furthermore, we observe that LLMs frequently produce functionally divergent implementations for the same ambiguous requirement and lack the capability to identify or resolve such ambiguity autonomously. These findings reveal a significant performance gap between clear and ambiguous requirements, underscoring the urgent need for ambiguity-aware techniques in the next generation of automated software engineering tools. The Orchid benchmark is publicly available at https://huggingface.co/datasets/SII-YDD/Orchid.
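To make the "functionally divergent implementations" finding concrete, here is a minimal illustrative sketch (not an actual Orchid task) of a vague requirement admitting two defensible readings: "truncate the string to at most n characters, appending '...' if it is too long." It is ambiguous whether the ellipsis counts toward the n-character budget, and the two readings disagree on the same input.

```python
# Illustrative only: a hypothetical vague requirement of the kind the
# benchmark studies, not taken from the Orchid dataset.
# Requirement: "Truncate the string to at most n characters, appending
# '...' if it is too long."

def truncate_a(s: str, n: int) -> str:
    """Reading 1: the ellipsis is appended AFTER the n-character cut."""
    return s[:n] + "..." if len(s) > n else s

def truncate_b(s: str, n: int) -> str:
    """Reading 2: the ellipsis counts toward n, so the result is <= n chars."""
    return s[:n - 3] + "..." if len(s) > n else s

# Both implementations satisfy some reading of the requirement,
# yet they diverge on the same input:
print(truncate_a("hello world", 8))  # -> hello wo...
print(truncate_b("hello world", 8))  # -> hello...
```

A test suite written against one reading will fail code generated under the other, which is exactly the clear-vs-ambiguous performance gap the study quantifies.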