The paper introduces NPUEval, a benchmark suite of 102 common ML operators for evaluating how well LLMs generate efficient NPU kernels. The authors tested a range of LLMs, including DeepSeek R1, on generating functionally correct and vectorized code for the AMD NPU, using open-source compiler tools for evaluation. Results show that while some LLMs achieve 50%+ vectorization on select kernels, average performance remains low (around 10%), highlighting the difficulty of NPU kernel generation even for state-of-the-art models.
LLMs struggle to generate efficient NPU kernels: across a new benchmark of 102 common ML operators they average only ~10% vectorization, despite promising results on select kernels.
Neural processing units (NPUs) are gaining prominence in power-sensitive client devices, with AI PCs being defined by their inclusion of these specialized processors. Running AI workloads efficiently on these devices requires libraries of optimized kernels, and writing efficient kernels demands expertise in domain-specific C++ with vector intrinsics, along with in-depth knowledge of the target architecture. Unlike GPU programming, which has had years to mature, NPU programming is new, with smaller and more fragmented developer communities across hardware platforms. This fragmentation poses a challenge when using LLMs to assist in writing NPU kernels, as domain-specific optimized code examples are underrepresented in LLM pre-training data. In this paper we introduce NPUEval -- a benchmark for writing and evaluating NPU kernels, consisting of 102 common operators for machine learning workloads. We evaluate LLM-generated code on actual hardware, based on both functional correctness and vectorization efficiency, using open-source compiler tools targeting the AMD NPU. We evaluate a range of state-of-the-art LLMs, spanning both proprietary and open-weight models. The latest reasoning models, such as DeepSeek R1, show promising results, achieving 50%+ vectorization out of the box on select kernels. However, the average score across the entire dataset remains roughly 10%, even with compiler feedback and vectorized kernel examples -- showing that this is a challenging dataset even for frontier models. The dataset and evaluation code will be released under a permissive open-source license, providing an essential benchmark for advancing research in code generation and NPU kernel optimization.
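The abstract scores kernels by vectorization efficiency. As a minimal illustrative sketch only -- assuming the metric is the fraction of executed operations that run on the vector unit, which is a hypothetical simplification and not necessarily the paper's actual formula -- such a score could be computed from a hardware profile like this:

```python
def vectorization_score(vector_ops: int, scalar_ops: int) -> float:
    """Fraction of executed operations that used the vector unit.

    Hypothetical sketch: the real NPUEval scoring pipeline may weight
    or measure operations differently.
    """
    total = vector_ops + scalar_ops
    if total == 0:
        return 0.0
    return vector_ops / total


# Hypothetical profile: 120 vector ops out of 1200 total executed ops
score = vectorization_score(vector_ops=120, scalar_ops=1080)
print(f"{score:.0%}")  # → 10%
```

Under this reading, "50%+ vectorization" means more than half of a kernel's operations ran on the vector unit, while a fully scalar kernel scores 0%.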