This paper introduces a real-time fire detection system that fuses RGB and thermal imaging within a lightweight YOLOv7 framework. The system addresses challenges like low visibility and dynamic backgrounds through feature-level fusion enhanced by an attention mechanism. By employing model compression techniques like pruning and quantization, the system achieves a mAP of 91.5% at 32 FPS on resource-limited devices, demonstrating its efficiency and suitability for real-world applications.
Achieve real-time (32 FPS) fire detection with 91.5% mAP on edge devices by fusing thermal and RGB data in a compressed YOLOv7 architecture.
Computer vision technologies are transforming safety-critical systems, providing intelligent and automated approaches to fire detection. This paper presents a novel real-time fire detection system that combines thermal and Red Green Blue (RGB) imaging through multimodal fusion within a lightweight You Only Look Once version 7 (YOLOv7) framework. The proposed system addresses the challenges of low visibility, dynamic backgrounds, and occlusion through feature-level fusion enhanced by an attention mechanism that prioritizes key features while suppressing noise. To ensure real-time performance, the model is compressed through pruning and quantization, significantly reducing computational overhead. Comprehensive experiments on both synthetic and real datasets demonstrate high detection accuracy, with a mean average precision (mAP) of 91.5% at an average inference speed of 32 frames per second (FPS) on resource-limited devices. These results highlight the system's efficiency and robustness, making it suitable for challenging environments such as wildfire monitoring, smart buildings, and industrial facilities.
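The paper does not reproduce its attention module here, but the general idea of attention-enhanced feature-level fusion can be sketched as follows. This is a minimal NumPy illustration of squeeze-and-excitation-style channel gating over concatenated RGB and thermal feature maps; the function name, shapes, and the use of a plain sigmoid in place of a learned excitation network are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def channel_attention_fusion(rgb_feat, thermal_feat):
    """Illustrative feature-level fusion with a channel-attention gate.

    rgb_feat, thermal_feat: (C, H, W) feature maps from parallel backbones.
    Returns a (2C, H, W) fused map with per-channel attention weights applied.
    """
    # Feature-level fusion: stack the two modalities along the channel axis.
    stacked = np.concatenate([rgb_feat, thermal_feat], axis=0)  # (2C, H, W)
    # "Squeeze": summarize each channel with global average pooling.
    descriptor = stacked.mean(axis=(1, 2))                      # (2C,)
    # "Excite": map descriptors to gates in (0, 1). A real model would use
    # a small learned MLP here; a raw sigmoid stands in for it.
    gates = 1.0 / (1.0 + np.exp(-descriptor))                   # (2C,)
    # Reweight channels: informative channels pass, noisy ones are damped.
    return stacked * gates[:, None, None]
```

In practice the gated map would feed into the YOLOv7 detection head, so channels from the modality that is more informative in a given scene (e.g. thermal under smoke) contribute more to the prediction.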
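Of the two compression techniques mentioned, quantization is the easier to illustrate in isolation. The sketch below shows generic symmetric per-tensor int8 post-training quantization, which is one common way to shrink a detector for edge deployment; the paper does not specify its quantization scheme, so the function names and the per-tensor symmetric design are assumptions for illustration only.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization (illustrative sketch).

    Maps float weights into [-127, 127] and returns the int8 tensor plus
    the scale needed to map values back to floats.
    """
    # One scale for the whole tensor, chosen so the largest magnitude maps to 127.
    scale = max(np.abs(weights).max() / 127.0, 1e-12)  # guard against all-zero tensors
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 tensor and scale."""
    return q.astype(np.float32) * scale
```

Storing int8 values instead of float32 cuts weight memory by roughly 4x, and integer arithmetic is typically faster on the resource-limited devices the abstract targets, at the cost of a small, bounded rounding error per weight.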