This paper presents an empirical study of the serving performance and behavior of Reasoning Large Language Models (RLLMs) compared to general LLMs, highlighting key differences such as memory usage patterns, straggler requests, adaptive running time, and domain preference. The authors evaluate the effectiveness of existing inference optimization techniques, finding that model quantization and speculative decoding improve efficiency with minimal accuracy loss, while prefix caching and KV cache quantization can degrade accuracy or serving performance for smaller RLLMs. The study validates these findings under real-world workloads, with request arrivals modeled by a Gamma distribution, across different datasets.
Naive application of LLM inference optimizations can *hurt* the performance of smaller reasoning models, highlighting the need for RLLM-specific serving strategies.
Reasoning large language models (RLLMs) have proven competitive with general LLMs in solving complex reasoning tasks such as mathematics and coding. However, the serving performance and behavior of RLLMs remain largely unexplored, which may hinder their deployment and utilization in real-world scenarios. To close this gap, we conduct a comprehensive study of RLLM serving in this paper. We first perform a pilot study comparing the serving performance of RLLMs and traditional LLMs, revealing several distinct behavioral differences: (1) significant memory usage and fluctuation; (2) straggler requests; (3) adaptive running time; and (4) domain preference. We then investigate whether existing inference optimization techniques remain valid for RLLMs. Our main takeaways are that model quantization and speculative decoding can improve serving efficiency with little compromise to RLLM accuracy, while prefix caching and KV cache quantization may even degrade accuracy or serving performance for small RLLMs. Lastly, we evaluate under real-world workloads modeled by a Gamma distribution to verify our findings. Empirical results across different datasets align with our main findings on RLLM serving. We hope our work provides the research community and industry with insights to advance RLLM inference serving.
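The abstract states that the real-world workload is modeled by a Gamma distribution but does not give concrete parameters. Below is a minimal sketch of one common way such a workload could be generated: sampling Gamma-distributed inter-arrival gaps for a load generator. The function name, `mean_interval_s`, `cv`, and the parameter values are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def generate_arrival_times(num_requests: int,
                           mean_interval_s: float = 1.0,
                           cv: float = 2.0,
                           seed: int = 0) -> np.ndarray:
    """Sample request arrival timestamps with Gamma-distributed inter-arrival gaps.

    mean_interval_s and cv (coefficient of variation) are illustrative
    assumptions: cv > 1 yields burstier traffic than a Poisson process,
    while cv = 1 reduces to exponential (Poisson-like) gaps.
    """
    rng = np.random.default_rng(seed)
    # For a Gamma distribution, mean = shape * scale and cv = 1 / sqrt(shape).
    shape = 1.0 / (cv ** 2)
    scale = mean_interval_s / shape
    gaps = rng.gamma(shape, scale, size=num_requests)
    # Cumulative sum gives absolute send times for replaying against a serving endpoint.
    return np.cumsum(gaps)

if __name__ == "__main__":
    arrivals = generate_arrival_times(num_requests=1000, mean_interval_s=0.5, cv=2.0)
    print(f"simulated span: {arrivals[-1]:.1f}s for {len(arrivals)} requests")
```

In a serving benchmark, the resulting timestamps would typically be used to schedule when each prompt is submitted to the inference server, so that throughput and latency are measured under bursty rather than uniform load.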