This paper explores the use of LLMs (GPT-5, Claude Sonnet 4.0, Gemini 2.5 Flash Thinking, and Llama-3.1-8B-Instruct) to automate UML class diagram generation from natural language requirements. A dual-validation framework, incorporating LLM-as-a-Judge and human-in-the-loop assessment, was used to evaluate the generated diagrams across five quality dimensions. Results show that LLMs can generate coherent and meaningful UML diagrams with substantial alignment to human evaluators, suggesting their potential as both modeling assistants and reliable evaluators in requirements engineering.
LLMs can generate coherent UML class diagrams from natural language requirements, and LLM-based quality judgments align substantially with human experts, suggesting the potential to automate a resource-intensive phase of software design.
The emergence of Large Language Models (LLMs) has opened new opportunities to automate software engineering activities that traditionally require substantial manual effort. Among these, class diagram generation represents a critical yet resource-intensive phase in software design. This paper investigates the capabilities of state-of-the-art LLMs, including GPT-5, Claude Sonnet 4.0, Gemini 2.5 Flash Thinking, and Llama-3.1-8B-Instruct, to generate UML class diagrams from natural language requirements automatically. To evaluate the effectiveness and reliability of LLM-based model generation, we propose a comprehensive dual-validation framework that integrates an LLM-as-a-Judge methodology with human-in-the-loop assessment. Using eight heterogeneous datasets, we apply chain-of-thought prompting to extract domain entities, attributes, and associations, generating corresponding PlantUML representations. The resulting models are evaluated across five quality dimensions: completeness, correctness, conformance to standards, comprehensibility, and terminological alignment. Two independent LLM judges (Grok and Mistral) perform structured pairwise comparisons, and their judgments are further validated against expert evaluations. Our results demonstrate that LLMs can generate structurally coherent and semantically meaningful UML diagrams, achieving substantial alignment with human evaluators. The consistency observed between LLM-based and human-based assessments highlights the potential of LLMs not only as modeling assistants but also as reliable evaluators in automated requirements engineering workflows, offering practical insights into the capabilities and limitations of LLM-driven UML class diagram automation.
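To make the pipeline concrete, the sketch below illustrates the final serialization step the abstract describes: rendering extracted domain entities, attributes, and associations as PlantUML class-diagram text. The entity/association schema used here is an assumption for illustration only; in the paper, extraction itself is performed via chain-of-thought LLM prompting, not by this function.

```python
def to_plantuml(entities, associations):
    """Render extracted model elements as PlantUML class-diagram text.

    entities: dict mapping class name -> list of "name: Type" attribute strings
    associations: list of (source, target, label) tuples
    (hypothetical intermediate representation, not the paper's exact format)
    """
    lines = ["@startuml"]
    for name, attrs in entities.items():
        lines.append(f"class {name} {{")
        # one attribute per line inside the class body
        lines.extend(f"  {attr}" for attr in attrs)
        lines.append("}")
    for src, dst, label in associations:
        # directed association with a role label
        lines.append(f"{src} --> {dst} : {label}")
    lines.append("@enduml")
    return "\n".join(lines)

# Example: a toy "a customer places orders" requirement
diagram = to_plantuml(
    {"Customer": ["name: String", "email: String"],
     "Order": ["total: Decimal"]},
    [("Customer", "Order", "places")],
)
print(diagram)
```

The resulting text can be fed directly to the PlantUML renderer, which is what makes this output format convenient for both automated checking and human review.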