This paper analyzes the challenges of large language model (LLM) inference, arguing that memory capacity, memory bandwidth, and interconnect latency, rather than raw compute, are now the primary bottlenecks. It holds that the autoregressive decode phase of Transformer models fundamentally differentiates LLM inference from training workloads. To address these challenges, the paper proposes four architectural research directions: High Bandwidth Flash, Processing-Near-Memory, 3D memory-logic stacking, and low-latency interconnect.
Forget chasing FLOPS: the real bottleneck for LLM inference is memory and interconnect, demanding a shift in hardware design.
Large Language Model (LLM) inference is hard. The autoregressive Decode phase of the underlying Transformer model makes LLM inference fundamentally different from training. Exacerbated by recent AI trends, the primary challenges are memory and interconnect rather than compute. To address these challenges, we highlight four architecture research opportunities: High Bandwidth Flash for 10X memory capacity with HBM-like bandwidth; Processing-Near-Memory and 3D memory-logic stacking for high memory bandwidth; and low-latency interconnect to speed up communication. While our focus is datacenter AI, we also review their applicability for mobile devices.
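Why decode is memory-bound rather than compute-bound can be seen from a back-of-the-envelope arithmetic-intensity calculation (this is a generic illustration, not an analysis from the paper): during autoregressive decode, each new token multiplies a single activation vector against every weight matrix, so each weight byte fetched from memory supports only a couple of floating-point operations.

```python
def decode_arithmetic_intensity(d_model: int, bytes_per_param: int = 2) -> float:
    """FLOPs per byte for one d_model x d_model matrix-vector product,
    the dominant operation per decode step (assuming fp16 weights,
    i.e. 2 bytes per parameter; hypothetical example values)."""
    flops = 2 * d_model * d_model              # one multiply-accumulate per weight
    bytes_read = d_model * d_model * bytes_per_param
    return flops / bytes_read

# At fp16 the intensity is 1.0 FLOP/byte regardless of model width.
print(decode_arithmetic_intensity(4096))       # 1.0
```

An accelerator whose peak-compute-to-memory-bandwidth ratio is in the hundreds of FLOPs per byte therefore spends the decode phase waiting on weight (and KV-cache) reads, which is why the abstract emphasizes bandwidth and capacity over raw compute.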