This paper introduces RooflineBench, a benchmarking framework for on-device LLMs based on the Roofline model, using operational intensity (OI) to unify architectural primitives and hardware constraints. The authors define an inference-potential region and introduce Relative Inference Potential to compare LLM efficiency on the same hardware. Empirical analysis reveals that sequence length significantly influences performance and OI, identifies an OI regression with increasing model depth, and demonstrates how structural refinements such as Multi-head Latent Attention (MLA) can unlock inference potential.
On-device LLM performance is heavily influenced by sequence length and model depth, with hardware heterogeneity creating efficiency traps that can be mitigated by architectural refinements like Multi-head Latent Attention.
The transition toward localized intelligence through Small Language Models (SLMs) has intensified the need for rigorous performance characterization on resource-constrained edge hardware. However, objectively measuring the theoretical performance ceilings of diverse architectures across heterogeneous platforms remains a formidable challenge. In this work, we propose a systematic framework based on the Roofline model that unifies architectural primitives and hardware constraints through the lens of operational intensity (OI). By defining an inference-potential region, we introduce the Relative Inference Potential as a novel metric to compare efficiency differences between Large Language Models (LLMs) on the same hardware substrate. Extensive empirical analysis across diverse compute tiers reveals that variations in performance and OI are significantly influenced by sequence length. We further identify a critical regression in OI as model depth increases. Additionally, our findings highlight an efficiency trap induced by hardware heterogeneity and demonstrate how structural refinements, such as Multi-head Latent Attention (MLA), can effectively unlock latent inference potential across various hardware substrates. These insights provide actionable directions for hardware-software co-design to align neural structures with physical constraints in on-device intelligence. The released code is available in Appendix C.
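To make the Roofline framing concrete, the sketch below shows the standard Roofline relation the framework builds on: attainable performance is the minimum of the compute roof and the bandwidth roof scaled by operational intensity. The hardware numbers (peak FLOP/s and bandwidth) are illustrative assumptions for a hypothetical edge device, not values from the paper.

```python
# Minimal sketch of the classic Roofline model underlying the paper's
# framework. PEAK_FLOPS and PEAK_BW are assumed, illustrative numbers.

PEAK_FLOPS = 4.0e12   # assumed peak compute throughput, FLOP/s
PEAK_BW = 100.0e9     # assumed peak memory bandwidth, bytes/s

def operational_intensity(flops, bytes_moved):
    """OI = total FLOPs performed per byte moved to/from memory."""
    return flops / bytes_moved

def attainable_performance(oi, peak_flops=PEAK_FLOPS, peak_bw=PEAK_BW):
    """Roofline: performance is capped by either the compute roof
    or the memory roof (bandwidth * OI), whichever is lower."""
    return min(peak_flops, oi * peak_bw)

# Ridge point: the OI where a kernel transitions from memory-bound
# to compute-bound on this (assumed) hardware.
ridge_oi = PEAK_FLOPS / PEAK_BW  # 40 FLOPs/byte here

# A low-OI kernel (e.g. decode-phase attention) is bandwidth-limited:
low = attainable_performance(2.0)     # 2.0 * 100e9 = 2.0e11 FLOP/s
# A high-OI kernel (e.g. large-batch GEMM) hits the compute roof:
high = attainable_performance(100.0)  # capped at 4.0e12 FLOP/s
```

In this picture, the paper's observations follow naturally: short sequences and deep stacks of small layers push kernels toward low OI (the bandwidth-limited regime), while refinements like MLA that shrink KV-cache traffic raise OI and move inference closer to the compute roof.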