This paper addresses the challenge of synthesis failures in High-Level Synthesis (HLS) caused by circuit behavior constraints by fine-tuning Large Language Models (LLMs) for automated code correction. The authors build a dataset from Vitis HLS synthesis feedback and apply LoRA-based fine-tuning to LLaMA-3.1-8B. Results show a 9.7% improvement in error detection accuracy and a 12.9% increase in correction applicability over the baseline, alongside a 3% geometric-mean (GEOMean) runtime improvement on a synthetic benchmark.
A fine-tuned LLM can automatically repair failing HLS code, boosting error detection by nearly 10% and correction applicability by almost 13%.
The integration of AI-based deep learning and advanced signal processing technologies has become crucial in intelligent edge computing systems. In these applications, HLS accelerates the implementation of deep learning accelerators and signal processing modules by converting C/C++ code into RTL hardware. However, HLS imposes unique circuit behavior constraints that frequently lead to synthesis failures, challenging both software and hardware developers. To address this, we propose a fine-tuning framework using LLMs for automated HLS code correction. We create a dataset from Vitis HLS synthesis feedback and apply LoRA-based fine-tuning to LLaMA-3.1-8B. Experimental results demonstrate that the fine-tuned model improves error detection accuracy by 9.7% and enhances correction applicability by 12.9% over the baseline model. Furthermore, a geometric-mean (GEOMean) runtime evaluation on a synthetic benchmark shows a 3% performance improvement, indicating a meaningful enhancement of HLS workflows.
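The core idea behind the LoRA fine-tuning the abstract mentions can be illustrated with a minimal numeric sketch. This is not the authors' implementation (which would apply adapters to LLaMA-3.1-8B, typically via a library such as Hugging Face PEFT); it only shows the LoRA mechanism itself: the pretrained weight `W` stays frozen, while two small trainable matrices `A` and `B` add a low-rank update scaled by `alpha / r`. All names and sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2    # toy hidden size and LoRA rank (real models use d in the thousands)
alpha = 4.0    # LoRA scaling factor (hyperparameter)

W = rng.normal(size=(d, d))               # frozen pretrained weight
A = rng.normal(scale=0.01, size=(r, d))   # trainable down-projection
B = np.zeros((d, r))                      # trainable up-projection, zero-initialized

def lora_forward(x, W, A, B, alpha, r):
    # y = x W^T + (alpha / r) * x A^T B^T
    # Base (frozen) path plus the low-rank adapter path.
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.normal(size=(1, d))
y0 = lora_forward(x, W, A, B, alpha, r)

# Because B starts at zero, the adapter is a no-op before training,
# so fine-tuning begins exactly at the pretrained model's behavior.
assert np.allclose(y0, x @ W.T)
```

During fine-tuning, only `A` and `B` receive gradients, which is why LoRA makes adapting an 8B-parameter model tractable: the number of trainable parameters is `2 * d * r` per adapted layer rather than `d * d`.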