JigsawServe is a serving framework for compound inference systems that jointly optimizes latency, accuracy, and GPU cost by adaptively selecting model variants and spatially partitioning GPUs for each task. It addresses two challenges: apportioning end-to-end latency and accuracy budgets across the tasks of a compound system, and allocating resources effectively for models with varying requirements in XR and similar applications. In evaluation, JigsawServe increases serviceable demand by 11.3x over the closest prior work and, across a wide range of scenarios, consumes only 43.3% of available GPU resources while meeting accuracy SLOs with under 0.6% latency SLO violations.
Squeezing over 11x more serviceable demand from your datacenter GPUs is now possible for compound inference workloads, thanks to JigsawServe's adaptive model selection and fine-grained spatial partitioning.
Applications in emerging domains such as XR are being built as compound inference systems, where multiple ML models are composed in the form of a task graph to service each request. Serving these compound systems efficiently raises two questions: how to apportion end-to-end latency and accuracy budgets between different tasks in a compound inference system, and how to allocate resources effectively for different models with varying resource requirements. We present JigsawServe, the first serving framework that jointly optimizes for latency, accuracy, and cost in terms of GPU resources by adaptively choosing model variants and by spatially partitioning GPUs to perform fine-grained resource allocation for each task of a compound inference system. Analytical evaluation of a system with a large number of GPUs shows that JigsawServe can increase the maximum serviceable demand (in requests per second) by 11.3x compared to the closest prior work. Our empirical evaluation shows that for a large range of scenarios, JigsawServe consumes only 43.3% of the available GPU resources while meeting accuracy SLOs with less than 0.6% latency SLO violations. All of JigsawServe's features contribute to this high efficiency -- removing any one of accuracy scaling, GPU spatial partitioning, or task-graph-informed resource budgeting significantly reduces efficiency.
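The joint decision the abstract describes -- picking a model variant plus a GPU slice for each task of a pipeline, subject to end-to-end latency and accuracy SLOs, while minimizing total GPU consumption -- can be illustrated with a small brute-force sketch. Everything below (the task names, variant tables, MIG-style partition sizes, and the simplifying assumption that a task's latency scales inversely with its GPU fraction) is an illustrative assumption, not JigsawServe's actual algorithm or data:

```python
from itertools import product

# Hypothetical two-task pipeline. Each variant is (name, accuracy, latency_ms
# on a full GPU). Latency is assumed to scale inversely with the GPU fraction
# assigned to the task -- a deliberate simplification for illustration.
VARIANTS = {
    "detector": [("det-small", 0.88, 8.0), ("det-large", 0.95, 20.0)],
    "tracker":  [("trk-lite", 0.90, 5.0), ("trk-full", 0.97, 12.0)],
}
GPU_FRACTIONS = (0.25, 0.5, 1.0)  # MIG-style spatial partition sizes

def plan(latency_slo_ms, accuracy_slo):
    """Brute-force the cheapest joint plan: one variant and one GPU slice
    per task, such that summed latency and pipeline accuracy meet the SLOs."""
    best = None
    tasks = list(VARIANTS)
    for choice in product(*(VARIANTS[t] for t in tasks)):
        for fracs in product(GPU_FRACTIONS, repeat=len(tasks)):
            latency = sum(lat / f for (_, _, lat), f in zip(choice, fracs))
            accuracy = min(acc for (_, acc, _) in choice)  # weakest stage bounds the pipeline
            cost = sum(fracs)                              # total GPUs consumed
            if latency <= latency_slo_ms and accuracy >= accuracy_slo:
                if best is None or cost < best[0]:
                    assignment = {t: (name, f)
                                  for t, (name, _, _), f in zip(tasks, choice, fracs)}
                    best = (cost, assignment)
    return best

print(plan(latency_slo_ms=60.0, accuracy_slo=0.90))
```

With these toy numbers, the cheapest feasible plan pairs a large detector on half a GPU with a lightweight tracker on a quarter GPU, showing why variant selection and partitioning must be decided jointly: the search space is the cross product of both choices across all tasks, and a real system would replace the brute-force loop with a scalable optimizer.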