SYMDIREC is a neuro-symbolic framework for RTL synthesis and summarization with LLMs. It decomposes each task into symbolic subgoals, retrieves relevant code with a fine-tuned retriever, and assembles verified outputs through LLM reasoning. Without any LLM fine-tuning, it achieves ~20% higher Pass@1 rates for synthesis and 15-20% ROUGE-L improvements for summarization over prompting and RAG baselines.
Symbolic planning yields substantial gains in RTL synthesis (~20% higher Pass@1) and summarization (15-20% ROUGE-L improvement) without fine-tuning the LLM.
Register-Transfer Level (RTL) synthesis and summarization are central to hardware design automation but remain challenging for Large Language Models (LLMs) due to rigid HDL syntax, limited supervision, and weak alignment with natural language. Existing prompting and retrieval-augmented generation (RAG) methods have not incorporated symbolic planning, limiting their structural precision. We introduce SYMDIREC, a neuro-symbolic framework that decomposes RTL tasks into symbolic subgoals, retrieves relevant code via a fine-tuned retriever, and assembles verified outputs through LLM reasoning. Supporting both Verilog and VHDL without LLM fine-tuning, SYMDIREC achieves ~20% higher Pass@1 rates for synthesis and 15-20% ROUGE-L improvements for summarization over prompting and RAG baselines, demonstrating the benefits of symbolic guidance in RTL tasks.
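The three-stage pipeline the abstract describes (symbolic decomposition, retrieval, LLM-guided assembly) can be sketched as follows. This is a minimal illustrative mock-up, not the authors' implementation: every function name and data structure is an assumption, the "planner" is a trivial splitter, the "retriever" is a keyword match standing in for the fine-tuned retriever, and the "assembly" step simply concatenates snippets in place of LLM reasoning and verification.

```python
# Hypothetical sketch of a SYMDIREC-style pipeline; all names are
# illustrative assumptions, not the paper's actual API.
from dataclasses import dataclass, field

@dataclass
class Subgoal:
    description: str                    # natural-language subgoal
    retrieved_snippet: str = field(default="")  # RTL found by the retriever

def decompose(spec: str) -> list[Subgoal]:
    """Symbolic planning step: split a spec into ordered subgoals.
    A real system would use a symbolic planner; here we split on ';'."""
    return [Subgoal(part.strip()) for part in spec.split(";") if part.strip()]

def retrieve(subgoal: Subgoal, corpus: dict[str, str]) -> Subgoal:
    """Stand-in for the fine-tuned retriever: naive keyword match."""
    for keyword, snippet in corpus.items():
        if keyword in subgoal.description:
            subgoal.retrieved_snippet = snippet
            break
    return subgoal

def assemble(subgoals: list[Subgoal]) -> str:
    """Stand-in for LLM-based assembly/verification: concatenation."""
    return "\n".join(s.retrieved_snippet for s in subgoals if s.retrieved_snippet)

# Toy Verilog corpus and spec, purely for illustration.
corpus = {
    "counter": "always @(posedge clk) count <= count + 1;",
    "reset":   "if (rst) count <= 0;",
}
spec = "implement a counter; add a synchronous reset"
rtl = assemble([retrieve(g, corpus) for g in decompose(spec)])
print(rtl)
```

The point of the structure, as the abstract argues, is that the symbolic subgoal list gives the LLM precise, verifiable targets rather than one monolithic generation task.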