The paper introduces CAD-Coder, a Vision-Language Model (VLM) fine-tuned on a newly created dataset, GenCAD-Code, of 163k pairs of CAD-model images and CadQuery Python code, to generate editable CAD code from visual inputs. CAD-Coder achieves a 100% valid syntax rate and outperforms state-of-the-art VLMs such as GPT-4.5 and Qwen2.5-VL-72B in 3D solid similarity. The model also shows generalization, generating code from real-world images and executing CAD operations unseen during fine-tuning.
Forget generic image-to-CAD models: CAD-Coder writes executable code with 100% valid syntax, opening the door to truly editable and customizable 3D designs.
Efficient creation of accurate and editable 3D CAD models is critical in engineering design, significantly impacting cost and time-to-market in product innovation. Current manual workflows remain highly time-consuming and demand extensive user expertise. While recent developments in AI-driven CAD generation show promise, existing models are limited by incomplete representations of CAD operations, inability to generalize to real-world images, and low output accuracy. This paper introduces CAD-Coder, an open-source Vision-Language Model (VLM) explicitly fine-tuned to generate editable CAD code (CadQuery Python) directly from visual input. Leveraging GenCAD-Code, a novel dataset we created consisting of over 163k CAD-model image and code pairs, CAD-Coder outperforms state-of-the-art VLM baselines such as GPT-4.5 and Qwen2.5-VL-72B, achieving a 100% valid syntax rate and the highest accuracy in 3D solid similarity. Notably, our VLM demonstrates some signs of generalizability, successfully generating CAD code from real-world images and executing CAD operations unseen during fine-tuning. The performance and adaptability of CAD-Coder highlight the potential of VLMs fine-tuned on code to streamline CAD workflows for engineers and designers. CAD-Coder is publicly available at: https://github.com/anniedoris/CAD-Coder.
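The headline metric above is a 100% valid-syntax rate for generated CadQuery scripts. A minimal sketch of how one might check that property on a model's output, using only Python's standard-library parser; the sample script and the helper function are illustrative assumptions, not code from the paper:

```python
import ast

def is_valid_python_syntax(code: str) -> bool:
    """Return True if the code parses as Python.

    Parsing is a necessary (though not sufficient) condition for a
    generated CadQuery script to be executable.
    """
    try:
        ast.parse(code)
        return True
    except SyntaxError:
        return False

# Illustrative CadQuery-style script of the kind a VLM might emit:
# a 10 x 10 x 5 plate with a centered hole of diameter 3.
sample = """
import cadquery as cq

result = (
    cq.Workplane("XY")
    .box(10, 10, 5)
    .faces(">Z")
    .workplane()
    .hole(3)
)
"""

print(is_valid_python_syntax(sample))   # a well-formed script passes
print(is_valid_python_syntax("box(10,"))  # a truncated script fails
```

A syntax check like this is cheap to run over a large benchmark set; full evaluation would additionally execute the script and compare the resulting solid to ground truth, as the paper's 3D-similarity metric does.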