This paper details the design and results of the 2025 IEEE Low-Power Computer Vision Challenge (LPCVC), which focused on image classification, open-vocabulary segmentation, and monocular depth estimation for edge devices. The challenge used the Qualcomm AI Hub for standardized benchmarking under latency, memory, and energy constraints. Analysis of the winning solutions reveals key trends in efficient model design for edge deployment, offering insights for future competitions.
The LPCVC 2025 winning solutions demonstrate effective strategies for balancing accuracy and efficiency in edge-based computer vision on resource-constrained devices.
The IEEE Low-Power Computer Vision Challenge (LPCVC) promotes the development of efficient vision models for edge devices, balancing accuracy against constraints such as latency, memory capacity, and energy use. The 2025 challenge featured three tracks: (1) image classification under varied lighting conditions and styles, (2) open-vocabulary segmentation with text prompts, and (3) monocular depth estimation. This paper presents the design of LPCVC 2025, including its competition structure and evaluation framework, which integrates the Qualcomm AI Hub for consistent and reproducible benchmarking. It also introduces the top-performing solutions from each track, outlines key trends and observations, and concludes with suggestions for future computer vision competitions.