This paper introduces the Hardware Quality Index (HQI) to evaluate LLMs for RTL generation, incorporating post-synthesis area, delay, and warning count. Evaluating 32 LLMs on VerilogEval and RTLLM using HQI reveals a three-tiered performance structure, with Gemini-3-Pro achieving the highest score (85.1 HQI) and 87.5% coverage. Analysis of synthesis failures highlights systematic differences between proprietary and open-weight models, suggesting open-weight models were trained primarily on simulation-grade rather than synthesis-grade RTL.
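The summary describes HQI as a 0--100 score combining post-synthesis area, delay, and warning count against an expert reference. The paper's exact formula is not reproduced here; the sketch below is a hypothetical composition (the `hqi_sketch` name, the weights, and the warning penalty are all illustrative assumptions) showing how such normalized metrics could fold into a single score.

```python
# Hypothetical HQI-style score: each post-synthesis metric is normalized
# against an expert reference implementation, then combined with weights.
# The weights (0.4, 0.4, 0.2) and the warning penalty are assumptions,
# not the paper's actual formula.

def hqi_sketch(area, delay, warnings, ref_area, ref_delay, ref_warnings=0,
               weights=(0.4, 0.4, 0.2)):
    """Return a 0-100 quality score; 100 = matches the expert reference."""
    # Ratios capped at 1.0: matching or beating the reference scores full marks.
    area_score = min(ref_area / area, 1.0) if area > 0 else 0.0
    delay_score = min(ref_delay / delay, 1.0) if delay > 0 else 0.0
    # Each synthesis warning beyond the reference count shrinks the score.
    warn_score = 1.0 / (1.0 + max(warnings - ref_warnings, 0))
    w_a, w_d, w_w = weights
    return 100.0 * (w_a * area_score + w_d * delay_score + w_w * warn_score)

# A design 20% larger and 11% slower than the reference, with one warning:
print(round(hqi_sketch(120.0, 2.0, 1, ref_area=100.0, ref_delay=1.8), 1))  # → 79.3
```

A design identical to the reference with no warnings scores exactly 100 under this weighting, matching the metric's stated 0--100 range.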
LLMs generating hardware code often fail *after* synthesis, and the type of failure (elaboration errors vs. missing wrappers) systematically depends on whether the model is proprietary or open-weight.
RTL generation demands more than software code synthesis: designs must be syntactically valid, synthesizable, functionally correct, and hardware-efficient. Existing evaluations stop at functional correctness, leaving synthesizability and implementation quality unmeasured. We evaluate 32 language models on 202 Verilog tasks from VerilogEval and RTLLM, with five attempts each, scoring via the Hardware Quality Index (HQI), a 0--100 metric integrating post-synthesis area, delay, and warning count relative to expert references under a Nangate45 45\,nm flow. Three performance tiers emerge: 13 frontier models achieve Global HQI above 71, led by Gemini-3-Pro (87.5\% coverage, 85.1 HQI); 11 mid-tier models cluster at 53--68; 8 fall below 53. The capability-to-deployment gap (best-of-five vs.\ single-attempt) spans 3.8--22.1 HQI points, motivating multi-sample strategies. A tool-adjudicated taxonomy of 195 genuine synthesis failures reveals systematic divergence: proprietary models fail late through elaboration errors and synthesis timeout; open-weight models fail early through missing module wrappers and non-synthesizable constructs, consistent with training on simulation-grade rather than synthesis-grade RTL. Rankings hold across three technology libraries at Spearman~$\rho>0.99$.
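The abstract's capability-to-deployment gap (best-of-five vs. single-attempt HQI) is the usual argument for multi-sample generation: when per-attempt quality varies, the expected maximum of several attempts exceeds the expected single attempt. The toy simulation below illustrates the effect with an assumed Gaussian score distribution; the mean, spread, and resulting gap are illustrative, not numbers from the paper.

```python
# Toy illustration of the best-of-five vs. single-attempt gap.
# Per-attempt HQI scores are drawn from an assumed N(60, 15) distribution;
# the distribution and its parameters are hypothetical, not from the paper.
import random

def expected_scores(trials=10_000, n=5, seed=0):
    """Estimate mean single-attempt score and mean best-of-n score."""
    rng = random.Random(seed)
    single_total = best_total = 0.0
    for _ in range(trials):
        attempts = [rng.gauss(60, 15) for _ in range(n)]
        single_total += attempts[0]       # deployment: take the first attempt
        best_total += max(attempts)       # capability: keep the best of n
    return single_total / trials, best_total / trials

one, five = expected_scores()
print(f"single attempt ~{one:.0f} HQI, best-of-five ~{five:.0f} HQI")
```

Even this crude model reproduces a double-digit gap between the two strategies, which is why the abstract argues that single-attempt deployment understates model capability.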