The paper addresses the challenge of concurrently running DNN training and inference on edge accelerators like the Nvidia Jetson, which lack native GPU sharing. The authors formulate an optimization problem that interleaves training and inference minibatches, dynamically adjusting the device power mode and inference minibatch size to maximize training throughput while meeting latency and power constraints. They introduce GMD, a gradient-descent search, and ALS, an active-learning technique, to profile power modes efficiently and identify Pareto-optimal configurations.
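The optimization described above can be pictured as picking, from a table of profiled configurations, the one that maximizes training throughput subject to the latency and power budgets. The sketch below is a minimal illustration of that selection step; the `Config` fields and values are hypothetical stand-ins, not the paper's actual formulation or measurements.

```python
from dataclasses import dataclass

# Hypothetical profile entry for one (power mode, inference batch) choice.
# All field names and units are illustrative assumptions.
@dataclass(frozen=True)
class Config:
    power_mode: int          # index into the device's power-mode table
    infer_batch: int         # inference minibatch size
    train_tput: float        # training samples/s measured in this config
    infer_latency_ms: float  # observed inference latency
    power_w: float           # average board power draw

def best_config(profiles, latency_budget_ms, power_budget_w):
    """Pick the feasible configuration with the highest training throughput."""
    feasible = [c for c in profiles
                if c.infer_latency_ms <= latency_budget_ms
                and c.power_w <= power_budget_w]
    return max(feasible, key=lambda c: c.train_tput, default=None)
```

In practice the table is never profiled exhaustively; GMD and ALS exist precisely to approximate this argmax while measuring only a small subset of configurations.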
Achieve near-optimal throughput for concurrent DNN training and inference on edge devices by intelligently time-slicing workloads and dynamically adjusting power modes, even with limited profiling.
The proliferation of GPU-accelerated edge devices like Nvidia Jetsons, together with rising privacy concerns, is placing an emphasis on concurrent DNN training and inferencing on edge devices. Inference and training have different compute and QoS goals. But edge accelerators like the Jetson do not support native GPU sharing and expose thousands of power modes. This requires careful time-sharing of concurrent workloads to meet power--performance goals while limiting costly profiling. In this paper, we design an intelligent time-slicing approach for concurrent DNN training and inferencing on Jetsons. We formulate an optimization problem to interleave training and inferencing minibatches, deciding the device power mode and inference minibatch size to maximize training throughput while staying within latency and power budgets, with modest profiling costs. We propose GMD, an efficient multi-dimensional gradient descent search that profiles just $15$ power modes, and ALS, an Active Learning technique that identifies reusable Pareto-optimal power modes but profiles $50$--$150$ power modes. We evaluate these within our Fulcrum scheduler for $273,000+$ configurations across $15$ DNN workloads. We also evaluate our strategies on inference with dynamic arrivals and on concurrent inferences. ALS and GMD outperform both simpler baselines and more complex baselines that use larger-scale profiling. Their solutions satisfy the latency and power budgets in $>97\%$ of our runs and are on average within $7\%$ of the optimal throughput.
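A multi-dimensional gradient-descent search like GMD can be sketched as a hill climb over the discrete power-mode dimensions (e.g. CPU/GPU/memory frequencies), profiling only the configurations it actually visits. This is a simplified illustration under assumed dimension names and a synthetic `profile` callback; the paper's exact step rule and objective may differ.

```python
def gmd_search(dims, profile, latency_budget, power_budget):
    """Hill-climb one dimension at a time over a discrete power-mode grid.

    dims:    {dimension name: sorted list of discrete settings} (assumed shape)
    profile: cfg dict -> (throughput, latency, power); stands in for an
             on-device measurement, which is the expensive step being limited.
    Returns the best configuration found and the number of configs profiled.
    """
    cfg = {k: v[len(v) // 2] for k, v in dims.items()}  # start mid-grid
    cache = {}  # memoize measurements so each config is profiled once

    def score(c):
        key = tuple(sorted(c.items()))
        if key not in cache:
            tput, lat, pwr = profile(c)
            # Infeasible configs score -inf so the climb steps away from them.
            feasible = lat <= latency_budget and pwr <= power_budget
            cache[key] = tput if feasible else float("-inf")
        return cache[key]

    improved = True
    while improved:
        improved = False
        for name, values in dims.items():
            i = values.index(cfg[name])
            for j in (i - 1, i + 1):  # probe both neighbours along this axis
                if 0 <= j < len(values):
                    cand = dict(cfg, **{name: values[j]})
                    if score(cand) > score(cfg):
                        cfg, improved = cand, True
    return cfg, len(cache)
```

Like any local search, this can settle on a local optimum, but it touches only a handful of the thousands of available power modes, which mirrors the profiling-cost trade-off the abstract describes.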