Deep Learning-based Wildfire Smoke Detection using Uncrewed Aircraft System Imagery

Louisiana Tech University, Ruston, LA 71272, USA
IEEE International Conference on Ubiquitous Robots (UR), 2024

The proposed methodology follows a systematic approach to enhance forest fire smoke detection. First, a Forest Segmentator, implemented as a Mask R-CNN model, performs image segmentation to extract non-forest regions such as sky and lake areas. Next, a binary mask is generated from the segmented image, excluding the identified non-forest regions. Finally, a Forest Smoke Detector, based on the YOLOv7 model, detects smoke within the masked image, which now contains only forest regions. This sequential process ensures focused and accurate smoke detection within the forested areas of the images.
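
As a rough illustration of this pipeline, the sketch below chains a torchvision Mask R-CNN (standing in for a model fine-tuned on forest scenes) with the YOLOv7 detector. The pretrained weights, the non-forest class ids, the file name, and the yolov7_detect wrapper are all illustrative placeholders, not the paper's actual configuration.

    # Rough sketch of the three-step pipeline; weights, class ids, and the
    # `yolov7_detect` wrapper are placeholder assumptions, not the paper's setup.
    import torch
    import torchvision
    from torchvision.io import read_image
    from torchvision.transforms.functional import convert_image_dtype

    NON_FOREST_CLASSES = {1, 2}  # hypothetical label ids for "sky" and "lake"

    # Step 1: Forest Segmentator (Mask R-CNN) segments the UAS frame.
    segmenter = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    segmenter.eval()
    image = convert_image_dtype(read_image("uas_frame.jpg"), torch.float)  # CxHxW in [0, 1]
    with torch.no_grad():
        pred = segmenter([image])[0]  # dict with "masks", "labels", "scores"

    # Step 2: build a binary mask that keeps only forest pixels.
    keep = torch.ones(image.shape[-2:], dtype=torch.bool)
    for soft_mask, label, score in zip(pred["masks"], pred["labels"], pred["scores"]):
        if label.item() in NON_FOREST_CLASSES and score.item() > 0.5:
            keep &= soft_mask[0] < 0.5  # drop confident non-forest pixels
    masked_image = image * keep  # sky and lake pixels zeroed out

    # Step 3: Forest Smoke Detector (YOLOv7) runs on the masked image.
    # boxes = yolov7_detect(masked_image)  # hypothetical wrapper around YOLOv7

Zeroing out non-forest pixels before detection, rather than filtering detections afterward, means cloud and water textures never reach the detector at all.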

Abstract

Recent years have seen notable advancements in wildfire smoke detection, particularly in Uncrewed Aircraft Systems (UAS)-based detection employing diverse deep learning (DL) approaches. Despite the promise exhibited by these approaches, detecting smoke from UAS imagery remains challenging due to the difficulty of differentiating smoke from similar phenomena such as clouds and water. This work introduces a novel DL-based method for smoke detection from UAS visual observations. The core idea is to segregate forest areas from non-forest regions, such as sky and lake, and apply smoke detection exclusively to forested areas, thus eliminating the chance of misidentifying clouds and water as smoke. Specifically, we utilized a Mask Region-Based Convolutional Neural Network (Mask R-CNN) for semantic segmentation to remove non-forest regions (e.g., sky and lake). Subsequently, a customized You Only Look Once-version 7 (YOLOv7) model was trained to detect smoke within the forest areas. The proposed method was validated on an image dataset collected from our previous prescribed burn experiment, from which we extracted 246 images to train both the Mask R-CNN and YOLOv7 models. Additionally, we extracted another 128 images to validate the efficacy of our enhanced wildfire smoke detection approach. The test results demonstrate that our proposed approach, employing the Mask R-CNN and YOLOv7 models, outperforms the YOLOv7-only model by 25.3% in precision, 18.7% in recall, and 45% in mean Average Precision (mAP).
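
To make the reported metrics concrete, the following minimal sketch shows how precision and recall are computed once detections have been matched to ground-truth smoke boxes at a fixed IoU threshold (commonly 0.5). The counts used here are illustrative, not the paper's data.

    # Precision and recall from matched detections; counts are illustrative.
    def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
        precision = tp / (tp + fp) if tp + fp else 0.0  # detections that really are smoke
        recall = tp / (tp + fn) if tp + fn else 0.0     # smoke instances actually found
        return precision, recall

    p, r = precision_recall(tp=80, fp=20, fn=25)
    print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.80, recall=0.76

    # mAP averages the area under the precision-recall curve across classes;
    # with a single "smoke" class it reduces to the AP of that class.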

Challenges in Wildfire Smoke Detection

Wildfire smoke detection using Uncrewed Aircraft Systems (UAS) imagery faces numerous challenges, primarily due to the complex and dynamic nature of environmental backgrounds, such as clouds, water, and varying lighting conditions. Our proposed approach addresses these challenges by integrating semantic segmentation with object detection, enhancing the accuracy of smoke detection in forested areas.

Specifically, the Mask R-CNN model segments out non-forest regions (e.g., sky and lake), which often share visual similarities with smoke, thereby reducing false positives. By restricting detection to forest areas, the YOLOv7 model achieves significant improvements in precision, recall, and mean Average Precision (mAP) over a conventional YOLOv7-only baseline.
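
Below is a minimal sketch of the masking step described above, assuming the soft instance masks for the non-forest classes (float arrays in [0, 1], one per instance) have already been collected from the segmenter. The forest_only helper and the 0.5 threshold are illustrative assumptions.

    # Union of thresholded non-forest instance masks, applied to the image.
    import numpy as np

    def forest_only(image: np.ndarray, non_forest_masks: list[np.ndarray],
                    threshold: float = 0.5) -> np.ndarray:
        """Zero out every pixel covered by a confident non-forest instance mask."""
        non_forest = np.zeros(image.shape[:2], dtype=bool)
        for soft_mask in non_forest_masks:
            non_forest |= soft_mask >= threshold  # binarize and take the union
        out = image.copy()
        out[non_forest] = 0  # blacked-out sky/lake pixels cannot be flagged as smoke
        return out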

This innovative methodology not only advances wildfire smoke detection capabilities but also lays the groundwork for future integration into real-time UAS-based surveillance systems, contributing to more effective wildfire management and mitigation strategies.

Comparison of Smoke Detection Approaches

Figure: Comparison of conventional YOLOv7-only detection vs. the proposed method integrating Mask R-CNN for segmentation and YOLOv7 for smoke detection.

Misdetection Example Using YOLOv7

Example of misdetection using YOLOv7: The model incorrectly identifies non-forest regions such as sky and water as smoke, demonstrating the challenge of distinguishing smoke from similar background elements.

Article

BibTeX

@inproceedings{mahmud2024deep,
  title={Deep Learning-based Wildfire Smoke Detection using Uncrewed Aircraft System Imagery},
  author={Mahmud, Khan Raqib and Wang, Lingxiao and Liu, Xiyuan and Jiahao, L and Hassan, Sunzid},
  booktitle={2024 21st International Conference on Ubiquitous Robots (UR)},
  pages={580--587},
  year={2024},
  organization={IEEE}
}