The paper introduces OpenRTLSet, a fully open-source dataset containing over 127,000 Verilog code samples derived from GitHub, VHDL translations, and synthesizable C/C++ translations. The authors generated natural language descriptions for each Verilog module using DeepSeek-R1, enabling fine-tuning of LLMs for Verilog code generation. Experiments with the Qwen and Granite model families demonstrate the dataset's effectiveness for hardware design tasks, exploring different context options, quantization techniques, and model sizes.
Forget proprietary restrictions: a new 127k-sample open-source dataset lets you train LLMs to generate Verilog code from natural language.
OpenRTLSet introduces the largest fully open-source dataset for hardware design, offering over 127,000 diverse Verilog code samples to the research community and industry. Our dataset uniquely combines Verilog code from GitHub repositories (98k modules), VHDL translations (5k modules), and synthesizable C/C++ translations (24k modules), all freely accessible without proprietary restrictions. Using the reasoning model DeepSeek-R1, we generated paired natural language descriptions for each code sample, enabling fine-tuning of various language model families (e.g., Qwen and Granite) for Verilog code generation. Our experiments explore multiple options, including Verilator-generated C++ files as additional context during labeling, quantization techniques (INT4 vs. BF16), and performance differences across model sizes (7B-32B parameters). OpenRTLSet demonstrates that open-source approaches can achieve superior performance in hardware design tasks, establishing a new foundation for accessible research and commercial use in this domain.
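The abstract describes pairing each Verilog module with a generated natural language description to produce fine-tuning data. A minimal sketch of what one such training record might look like, assuming a generic instruction/response JSONL format (the field names and prompt wording here are illustrative assumptions, not the paper's actual schema):

```python
import json

def make_record(description: str, verilog_code: str) -> dict:
    """Pair an NL description with its Verilog module as one
    instruction-tuning sample (hypothetical field names)."""
    return {
        "instruction": "Write a Verilog module that implements the following:\n"
                       + description,
        "response": verilog_code,
    }

if __name__ == "__main__":
    # Toy example: a 4-bit counter module and a description of the kind
    # a reasoning model like DeepSeek-R1 might generate for it.
    module = (
        "module counter(input clk, input rst, output reg [3:0] q);\n"
        "  always @(posedge clk) q <= rst ? 4'd0 : q + 4'd1;\n"
        "endmodule"
    )
    desc = "A 4-bit counter with synchronous reset, incrementing on each clock edge."
    # One line of a JSONL fine-tuning file.
    print(json.dumps(make_record(desc, module)))
```

Collections of such records, one per module, are the typical input format for supervised fine-tuning of code LLMs.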