The paper introduces SuperInfer, a novel LLM inference system tailored for Superchips like the NVIDIA GH200, addressing the challenge of meeting stringent latency SLOs under high request rates. SuperInfer employs RotaSched, a proactive SLO-aware rotary scheduler, to mitigate head-of-line blocking and maintain responsiveness. The system also features DuplexKV, an optimized rotation engine enabling full-duplex KV-cache transfer over NVLink-C2C.
SuperInfer unlocks the potential of Superchips for LLM serving by proactively rotating requests to meet stringent latency SLOs, improving Time-To-First-Token SLO attainment by up to 74.7%.
Large Language Model (LLM) serving faces a fundamental tension between stringent latency Service Level Objectives (SLOs) and limited GPU memory capacity. When high request rates exhaust the KV-cache budget, existing LLM inference systems often suffer severe head-of-line (HOL) blocking. While prior work explored PCIe-based offloading, these approaches cannot sustain responsiveness under high request rates, often failing to meet tight Time-To-First-Token (TTFT) and Time-Between-Tokens (TBT) SLOs. We present SuperInfer, a high-performance LLM inference system designed for emerging Superchips (e.g., NVIDIA GH200) with a tightly coupled GPU-CPU architecture via NVLink-C2C. SuperInfer introduces RotaSched, the first proactive, SLO-aware rotary scheduler that rotates requests to maintain responsiveness on Superchips, and DuplexKV, an optimized rotation engine that enables full-duplex transfer over NVLink-C2C. Evaluations on GH200 using various models and datasets show that SuperInfer improves TTFT SLO attainment rates by up to 74.7% while maintaining TBT and throughput comparable to state-of-the-art systems, demonstrating that SLO-aware scheduling and memory co-design unlock the full potential of Superchips for responsive LLM serving.
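To make the rotation idea concrete, here is a minimal, hypothetical sketch of an SLO-aware rotary admission policy, not the paper's actual RotaSched algorithm: when GPU KV blocks run out, the scheduler rotates the running request with the most TBT slack out to CPU memory so a waiting prefill with a tighter deadline can be admitted. All names (`RotaryScheduler`, `Request`, `kv_blocks`, `slack`) are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Request:
    rid: int
    kv_blocks: int        # KV-cache blocks this request occupies on the GPU
    ttft_deadline: float  # absolute TTFT deadline (would drive queue order in a full system)
    slack: float          # estimated slack before the next token is due (TBT SLO)

class RotaryScheduler:
    """Toy SLO-aware rotary scheduler (illustrative, not the paper's RotaSched):
    when GPU KV blocks run out, rotate the running request with the most TBT
    slack out to CPU memory so a tighter-deadline request can be admitted."""

    def __init__(self, gpu_blocks: int):
        self.free = gpu_blocks
        self.running: list[Request] = []
        self.cpu_resident: list[Request] = []

    def admit(self, req: Request) -> bool:
        # Proactively rotate out decode requests (most slack first) until req fits.
        while self.free < req.kv_blocks and self.running:
            victim = max(self.running, key=lambda r: r.slack)
            if victim.slack <= req.slack:
                break  # rotating would hurt a request with a tighter SLO
            self.running.remove(victim)
            self.cpu_resident.append(victim)  # KV cache migrates GPU -> CPU
            self.free += victim.kv_blocks
        if self.free < req.kv_blocks:
            return False  # still no room: req must wait in the queue
        self.free -= req.kv_blocks
        self.running.append(req)
        return True

# Example: a long decode with ample slack is rotated out for an urgent prefill.
sched = RotaryScheduler(gpu_blocks=10)
long_decode = Request(rid=1, kv_blocks=8, ttft_deadline=1.0, slack=5.0)
urgent = Request(rid=2, kv_blocks=6, ttft_deadline=0.2, slack=0.5)
sched.admit(long_decode)
sched.admit(urgent)      # rotates long_decode to CPU, then admits urgent
```

In a real system the rotation itself would overlap with compute via asynchronous GPU-to-CPU copies (the role the abstract assigns to DuplexKV's full-duplex NVLink-C2C transfers); this sketch only captures the admission-time policy.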