This paper reviews real-time object detection models, specifically one-stage detectors like SSD and YOLO, for deployment on resource-constrained embedded systems. It addresses the challenge of implementing computationally intensive object detection models on devices with limited processing power, memory, and energy. The review covers model compression techniques such as knowledge distillation, pruning, and quantization to facilitate efficient deployment.
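Of the compression techniques named above, quantization is the most mechanical to illustrate. The sketch below shows symmetric per-tensor int8 quantization with NumPy; it is a minimal, hypothetical example of the general technique, not code from the paper under review, and the function names are my own.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor quantization: map floats onto int8 in [-127, 127]."""
    scale = np.max(np.abs(x)) / 127.0  # one scale for the whole tensor
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original floats."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.0, 0.25], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)  # close to w, but stored in 8 bits per weight
```

Storing weights as int8 instead of float32 cuts memory 4x, which is exactly the kind of saving embedded deployment depends on; real toolchains (e.g. per-channel scales, calibration) are more elaborate than this per-tensor version.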
Deploying cutting-edge object detection on embedded systems is hard; this review surveys the models and compression techniques best suited to making it practical.
Computer vision relies heavily on object detection: it is everywhere, from autonomous drones to industrial monitoring. Despite these advances, however, implementing cutting-edge object detection models on embedded systems proves difficult due to limitations in processing power, memory, and energy consumption. This paper presents a detailed examination of real-time object detection models designed for resource-constrained devices. We investigate widely used one-stage detectors such as SSD (Single Shot MultiBox Detector) and YOLO (You Only Look Once). In addition, we discuss model compression methods, namely knowledge distillation, pruning, and quantization, which enable efficient deployment on embedded systems.
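To make one of the compression methods concrete, the following is a minimal sketch of unstructured magnitude pruning, the simplest pruning variant: the smallest-magnitude fraction of weights is zeroed out. This is an illustrative NumPy example of the general idea, not an implementation from the reviewed paper, and the function name is an assumption of mine.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
    k = int(weights.size * sparsity)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

w = np.array([[0.9, -0.05, 0.3],
              [-0.01, 0.7, -0.2]])
pruned = magnitude_prune(w, 0.5)  # half of the six weights are set to zero
```

In practice, pruning is usually followed by fine-tuning to recover accuracy, and structured variants (removing whole channels or filters) are often preferred on embedded hardware because they yield dense, smaller tensors rather than sparse ones.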