Article

Forest Smoke-Fire Net (FSF Net): A Wildfire Smoke Detection Model That Combines MODIS Remote Sensing Images with Regional Dynamic Brightness Temperature Thresholds

1 College of Computer and Control Engineering, Northeast Forestry University, Harbin 150040, China
2 School of Computer Science and Information Engineering, Harbin Normal University, Harbin 150025, China
3 Department of Computer Science, Durham University, Durham DH1 3LE, UK
* Author to whom correspondence should be addressed.
Forests 2024, 15(5), 839; https://doi.org/10.3390/f15050839
Submission received: 1 April 2024 / Revised: 28 April 2024 / Accepted: 7 May 2024 / Published: 10 May 2024
(This article belongs to the Section Natural Hazards and Risk Management)

Abstract

Satellite remote sensing plays a significant role in the detection of smoke from forest fires. However, existing methods for detecting forest fire smoke from remote sensing images rely solely on the information provided by the images, overlooking the positional information and brightness temperature of the fire spots. This oversight significantly increases the probability of misjudging smoke plumes. This paper proposes a smoke detection model, Forest Smoke-Fire Net (FSF Net), which integrates wildfire smoke images with the dynamic brightness temperature information of the region. The MODIS_Smoke_FPT dataset was constructed using Moderate Resolution Imaging Spectroradiometer (MODIS) data, the meteorological information at the site of the fire, and elevation data to determine the location of smoke and the brightness temperature threshold for wildfires. Deep learning and machine learning models were trained separately using the image data and fire spot area data provided by the dataset. The performance of the deep learning model was evaluated using the mean average precision (mAP) metric, while the regression performance of the machine learning models was assessed with the Root Mean Square Error (RMSE) and Mean Absolute Error (MAE). The selected machine learning and deep learning models were then organically integrated. The results show that the Mask_RCNN_ResNet50_FPN and XGR models performed best among the deep learning and machine learning models, respectively. Combining the two models achieved good smoke detection results (Precision_smoke = 89.12%). Compared with wildfire smoke detection models that rely on image recognition alone, the model proposed in this paper demonstrates stronger applicability in improving the precision of smoke detection, thereby providing beneficial support for the timely detection of forest fires and applications of remote sensing.

1. Introduction

Forest fires, as one of the most severe disasters affecting the global forest ecosystem, pose a significant threat to human life and the ecosystem [1,2,3,4,5,6,7,8]. The massive amount of smoke produced during forest fires causes severe air pollution and poses a serious threat to human health [9,10,11]. However, fire smoke is one of the important signals of forest fires and plays a crucial role in forest fire detection [12,13].
In early forest fire detection tasks, forest biomass sensors were used as the main means of early fire detection [14,15]. However, fire detection based on sensor data has significant accuracy and rationality issues, and fire detection that relies solely on thermal information overlooks the important characteristics of fire smoke.
To achieve the precise identification of forest fires, researchers have turned to image recognition methods based on computer vision for smoke detection. Zhang et al. [16] addressed the lack of forest fire data by synthesizing forest fire smoke images by inserting real smoke into forest backgrounds and used these data to train a model with Faster R-CNN for detecting forest smoke. Yuan et al. [17] proposed a learning-based fuzzy smoke detection method that combines drone data with fuzzy logic, image segmentation, and the intelligent adjustment rules of an extended Kalman filter. Huang et al. [18] aimed to improve the detection capability for small-target smoke in early forest fires by proposing an improved end-to-end object detection model. This model combines a Multi-scale Context-Contrast Local Feature module (MCCL) and a Dense Pyramid Pooling Module (DPPM) to enhance the perception of small-target smoke features. Li et al. [19] addressed the issue of false negatives and false positives in smoke detection in complex scenes by improving the YOLOv5 model, adding a Coordinate Attention module to increase the accuracy of smoke detection. Additionally, to better focus on the global information of fire smoke, the SPPF module was replaced with the RFB module combined with Bi-FPN for better feature fusion. Experimental results demonstrated that the improved model achieved significant improvements in the detection of fires and smoke in realistic forest fire images. In the area of image-based forest fire smoke detection, researchers have made significant breakthroughs in accurately identifying smoke and addressing challenges in complex scenarios. However, despite the high accuracy and real-time advantages of target detection-based smoke identification, these computer vision-based detection methods face challenges, such as difficulty in multi-target processing and limited adaptability to scenes, rendering them less applicable to large-scale forest fire smoke detection tasks.
Remote sensing, with its wide coverage and high timeliness, performs well in detection tasks across areas of high spatial–temporal heterogeneity and large spatial–temporal scales. With the rise of remote sensing technology, researchers have introduced remote sensing into the task of forest fire smoke detection. Ba et al. [20] collected data on clouds, dust, haze, land, seaside, and smoke from around the world using Moderate Resolution Imaging Spectroradiometer (MODIS) data, and formed the USTC_SmokeRS dataset with true color RGB images generated from bands 1, 4, and 3. Additionally, they proposed a Convolutional Neural Network (CNN) model that combines spatial and channel attention based on remote sensing data. Experimental results showed that the accuracy of smoke classification on the USTC_SmokeRS dataset reached 92.75%, achieving accurate classification results. Li et al. [21] processed MODIS data for forest fire smoke, converting it into reflectance or brightness temperature and combining it with a multi-threshold method and BP neural network for seasonal training, classifying smoke plumes against other types of plumes under different seasonal backgrounds. Dewangan et al. [22] proposed a public dataset (FIgLib) containing 25,000 fire smoke images, addressing the impact of limited dataset sizes and unreliable data on previous deep learning methods. They introduced a new deep learning framework, SmokeyNet, which demonstrated good smoke detection performance on the FIgLib dataset.
However, clouds share similar shapes and properties with wildfire smoke, often leading to misidentification between smoke and clouds in large-scale detection [23]. Smoke and clouds both have low transparency, obscuring ground features in remote sensing images and making it difficult to distinguish ground features [24,25,26,27]. The interference from clouds poses a significant challenge to the precise identification of wildfire smoke and fire monitoring. Miao et al. [28] proposed a method using edge detection to identify smoke and clouds in satellite images based on the strip-like characteristics of smoke, showing that this method can effectively detect strip-like smoke. Although smoke and clouds share similar shapes and transparency, the temperature characteristics of smoke from wildfires are usually higher than the surrounding atmosphere and clouds [29,30,31]. The heat from fires can raise the smoke to higher altitudes, detectable by certain infrared remote sensing bands, showing temperature differences from surrounding clouds [32]. Additionally, in terms of location and shape, wildfire smoke typically originates from the ground in an inverted cone shape, while clouds are at various altitudes in the atmosphere, usually in more regular shapes like layered or rolled [33]. These different characteristics between smoke and clouds provide reliable support for the more precise identification of wildfire smoke. However, in large-scale, high-temporal–spatial constraint wildfire smoke detection tasks, researchers have not considered the combination of wildfire smoke and regional brightness temperature thresholds, leading to false negatives and false positives in long-duration smoke detection tasks. This limitation has restricted the effectiveness of remote sensing satellite images in wildfire smoke detection [34].
This paper proposes Forest Smoke-Fire Net (FSF Net), a wildfire smoke detection model, which combines MODIS remote sensing images with dynamic regional brightness temperature thresholds. This model organically integrates deep learning techniques with machine learning algorithms and introduces the fire spot brightness temperature threshold into the wildfire smoke detection task. This method fully utilizes the shape characteristics of remote sensing smoke and the land temperature characteristics (dynamic fire spot threshold) at the time of smoke generation to increase the accuracy of wildfire smoke identification. Compared to algorithms that solely use remote sensing images for wildfire smoke detection, the proposed wildfire smoke detection model is more accurate, significantly reducing false negatives and false positives in large-scale, long-duration wildfire smoke detection tasks.

2. Dataset and Data Preprocessing

2.1. Dataset Construction

In current research, datasets combining remote sensing smoke images of forest fires with regional dynamic brightness temperature thresholds are scarce. The most commonly used datasets for forest fire smoke detection, such as the USTC_SmokeRS dataset [20] and the FIgLib dataset [35], primarily focus on visual smoke detection without considering the brightness temperature information of the smoke’s location. To address this, by collecting and processing a large number of MODIS remote sensing images and channel data, this paper first constructs a forest fire smoke dataset that includes both remote sensing smoke images and regional dynamic brightness temperature information, named the MODIS_Smoke_FPT dataset. The dataset comprises two parts: (1) true color images of forest fire smoke constructed from channels 1, 3, and 4, used for the identification and segmentation of fire smoke; (2) the spatiotemporal information of the forest fire site, including elevation data, meteorological information, and fire spot threshold values, used to acquire and determine the region’s brightness temperature threshold information.

2.1.1. Study Area

To enhance the broad applicability of the research, this paper collects data on large-scale forest fire events from different locations and time periods around the world to better construct the MODIS_Smoke_FPT dataset. Utilizing search engines such as Google and Baidu, a historical review of forest fire incidents worldwide was conducted, gathering information that covers six continents (excluding Antarctica). Given that large-scale forest fires frequently occur in North America, the collected smoke data are primarily focused on this region. Based on the detailed information of these events, including the type of incident, the time of occurrence, and the geographical coordinates, the locations and times of forest fires are determined. Remote sensing data for the corresponding areas are downloaded via NASA Worldview (the address is given in Section 2.1.2). Figure 1 presents the global distribution of the forest fire locations collected in this study.

2.1.2. Data Source

The MODIS sensors onboard the Terra and Aqua satellites have been widely applied in the detection of forest fires and smoke. In the decades following the launch of the Terra and Aqua satellites, satellite remote sensing has captured a vast amount of forest fire data. Additionally, the unique spectral range of the MODIS sensor has provided strong support for the accurate capture of forest fire data. Specifically, the MODIS sensor covers a broad spectral range from visible to thermal infrared wavelengths, approximately from 0.4 μm to 14.4 μm. Moreover, the MODIS sensor features 36 spectral channels designed to observe various characteristics of the Earth, including the atmosphere, clouds, bodies of water, and land surfaces. These features of the MODIS sensor offer robust support for the detection of forest fires and smoke. In constructing the dataset for this paper, we use Worldview (https://worldview.earthdata.nasa.gov/ (accessed on 1 July 2023)) to obtain and download MODIS Level-2 data. In further processing, we use bands 1, 4, and 3 to assign red, green, and blue channels, respectively, to generate true color images. We also extract the Normalized Difference Vegetation Index (NDVI) from the location to compute fire spot thresholds for use in machine learning models (the interpretation of fire spot thresholds is presented in Appendix A of this paper).
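As a minimal sketch of the band-to-channel assignment described above, the following function stacks MODIS bands 1, 4, and 3 into the red, green, and blue channels of an 8-bit true color image. The function name is ours, and the band arrays are assumed to already be calibrated reflectance values in [0, 1]; the georeferencing and bow-tie correction required for real MODIS Level-2 granules are omitted.

```python
import numpy as np

def true_color_composite(band1, band4, band3):
    """Stack MODIS band 1 (red), band 4 (green), and band 3 (blue)
    into an 8-bit true color RGB image.

    Each input is an H x W array of reflectance values in [0, 1]
    (an illustrative assumption; real granules need calibration first).
    """
    rgb = np.stack([band1, band4, band3], axis=-1)
    rgb = np.clip(rgb, 0.0, 1.0)          # clamp out-of-range reflectance
    return (rgb * 255).astype(np.uint8)   # scale to 8-bit per channel
```

The resulting array can be written out directly as a PNG/JPEG smoke image for annotation and model training.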

2.1.3. MODIS Wildfire Smoke Images

In remote sensing image recognition, clouds possess characteristics similar to those of wildfire smoke and, as the most significant source of interference in forest fire smoke detection, present a substantial challenge to the accurate monitoring of wildfire smoke. Specifically, in terms of spectral characteristics, both wildfire smoke and clouds have reflective and absorptive properties in the visible and infrared wavelengths and may therefore appear extremely similar in certain spectral channels. In the task of detecting wildfire smoke, which is marked by high spatiotemporal heterogeneity, extensive areas of wildfire smoke and clouds can display similar textures and colors, especially in lower-resolution images. In terms of dynamics and mobility, the propagation of both wildfire smoke and clouds is influenced by wind speed and direction, showing certain similarities in their flow and changes. Based on this, in constructing the MODIS_Smoke_FPT dataset, we strive to select images where smoke and clouds are present in the same area. Specifically, within the same remote sensing image, several situations can occur, such as smoke and clouds side by side, smoke mixed with clouds, and smoke and clouds separated by intervals. Figure 2 provides examples of wildfire smoke images where smoke and clouds coexist.
In the construction of the dataset, MODIS bands 1, 4, and 3 are assigned to the red, green, and blue channels, respectively, to generate true color images of forest fire smoke (see Figure 2). The characteristics of MODIS bands 1, 3, and 4 are presented in Table 1 [20]. To build a model for detecting smoke from forest fires, true color RGB image datasets, which can generally be produced by optical satellite sensors, were chosen. This choice was made not only because they are more universal compared to multispectral data, but also because they offer greater applicability and scalability for in-depth exploration across different models.

2.1.4. Data Annotation

In this paper, the LabelMe image annotation tool was utilized to label MODIS remote sensing smoke images. LabelMe is an open-source image annotation software that allows users to make precise annotations through a simple user interface. Using LabelMe, smoke areas within each image were segmented and labeled. This process involves identifying the smoke areas in the image and drawing boundaries around them to accurately segment smoke and non-smoke areas.
Furthermore, to enhance the accuracy and consistency of the annotations, the method of cross-annotation was employed for annotating the dataset. Cross-annotation is a data annotation method, especially common in image processing and machine learning fields, used to improve the quality and reliability of annotated data. In the cross-annotation process, different parts of the same dataset are annotated by different annotators. The main purpose of this approach is to reduce the bias and errors of individual annotators, thereby improving the overall accuracy and consistency of the dataset. Through this method, a high-quality, high-precision remote sensing smoke dataset can be generated.

2.2. Dynamic Brightness Temperature Threshold Inversion

In our previous work, a method for the inversion of dynamic fire spot thresholds was proposed to determine the regional fire spot brightness temperature thresholds under different environmental factors [32]. This method takes into account the differences in weather, vegetation status, topography, and other environmental factors of different fires to adaptively determine the fire spot brightness temperature thresholds. Compared with fixed thresholds, dynamic thresholds can better determine the brightness temperature thresholds of wildfires under different regional factors, thereby providing valuable auxiliary evidence for the identification of fire occurrences. This paper introduces the method of dynamic brightness temperature threshold inversion to calculate the dynamic brightness temperature thresholds for different regions, which can then be combined with the smoke areas determined by remote sensing images to judge the occurrence of wildfires. The process of dynamic brightness temperature threshold inversion mainly consists of the following four steps:
(1) Channel Information Extraction: To swiftly construct the regional brightness temperature data in the MODIS_Smoke_FPT dataset, this paper developed a script for quickly exporting the 36-channel data of MODIS remote sensing image HDF files. This method allows for the rapid export of MODIS’s 36-channel data, with the specific algorithmic process shown in Table A1 in Appendix B.
(2) Brightness Temperature Calculation: Brightness temperature refers to the equivalent temperature value exhibited by ordinary objects within a specific spectral range. This temperature is based on the object’s radiant brightness matching the radiant brightness of an ideal black body at the corresponding black body temperature. Specifically, when an object’s radiant brightness is equivalent to that of an absolute black body, the temperature of this black body can be considered the object’s brightness temperature [36,37]. Brightness temperature can be calculated using the Planck formula, which converts the radiance of an object in a specific band (such as band 21) into the corresponding brightness temperature. The Planck formula is the core calculation method for this process, as shown in Equation (1):

$T_i = \dfrac{C_2}{\lambda_i \ln\left(1 + \dfrac{C_1}{\lambda_i^5 R_i}\right)}$,

where $T_i$ is the brightness temperature of channel $i$ (K), $\lambda_i$ is the central wavelength of channel $i$, $R_i$ is the radiance of channel $i$, and $C_1$ and $C_2$ are constants, $C_1 = 1.19104365 \times 10^{6}\ \mathrm{W \cdot m^{2}}$ and $C_2 = 1.4387685 \times 10^{4}\ \mathrm{\mu m \cdot K}$.
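The inverse Planck relation in Equation (1) is straightforward to implement. The sketch below uses the constants as printed in the text (the function name and the example wavelength for band 21 are illustrative, not values taken from the paper):

```python
import numpy as np

# Constants as printed alongside Equation (1).
C1 = 1.19104365e6
C2 = 1.4387685e4

def brightness_temperature(radiance, wavelength_um):
    """Invert the Planck formula: channel radiance R_i and central
    wavelength lambda_i (micrometres) -> brightness temperature T_i (K)."""
    return C2 / (wavelength_um * np.log(1.0 + C1 / (wavelength_um ** 5 * radiance)))
```

Because the logarithm shrinks as radiance grows, hotter (more radiant) pixels always map to higher brightness temperatures, which is what the fire spot thresholding in the later steps relies on.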
(3) Otsu’s Algorithm Processes Regional Binarized Images: Otsu’s algorithm, also known as Otsu’s thresholding method, automatically determines a threshold by maximizing the inter-class variance, effectively separating the image foreground from the background; it is widely used in image processing and visual analysis [38,39]. In the experiments, Otsu’s method is utilized to differentiate between burning and non-burning areas, enhancing the role of high-temperature pixels such as flames in image segmentation. A high-pass filter (with the threshold set at pixel value ≥ 60) is applied before binarizing the image to filter out low-temperature pixels more effectively, making the segmentation more accurate. The result of the binarization process is shown in Figure 3, where the image is clearly divided into two parts: the low-temperature area representing the unburned region and the high-temperature area indicating the burning region.
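Step (3) can be sketched in NumPy as follows. The function names are ours, the input is assumed to be an 8-bit grayscale array, and the ≥ 60 high-pass cut matches the value stated above; a production pipeline (e.g. with OpenCV's built-in Otsu flag) would operate on the full brightness temperature image:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold that maximises the inter-class variance (Otsu),
    for an 8-bit grayscale image."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    sum_all = float(np.dot(np.arange(256), hist))
    best_t, best_var = 0, 0.0
    w0, sum0 = 0, 0.0
    for t in range(256):
        w0 += hist[t]                       # pixels at or below t (class 0)
        if w0 == 0:
            continue
        w1 = total - w0                     # remaining pixels (class 1)
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2      # inter-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def segment_burning(gray):
    """High-pass filter (pixel >= 60) then Otsu binarisation, as in step (3).
    Returns 1 for burning (high-temperature) pixels, 0 for unburned."""
    filtered = np.where(gray >= 60, gray, 0)   # suppress low-temperature pixels
    t = otsu_threshold(filtered)
    return (filtered > t).astype(np.uint8)
```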
(4) Identifying Anomalous Brightness Temperature Values: For the smoke areas segmented using deep learning techniques, the brightness temperature of each pixel in the region is calculated iteratively. The brightness temperature values and geographic coordinates (latitude and longitude) of pixels that exceed the regional brightness temperature threshold are marked. These marked anomalous brightness temperature values serve as crucial information for assisting in the identification of wildfire smoke.
Figure 3. Differentiating fire spots using Otsu’s method, where the left part is the binary image, and the right part corresponds to the original grayscale image.

2.3. Other Information

To more accurately determine the dynamic threshold information of fire spots, elevation and meteorological information are incorporated into the machine learning model used to determine the area of fire occurrence, to assist in calculating the dynamic brightness temperature threshold of fire spots. The specific variable names for elevation and meteorological information are shown in Table 2.

3. Methods

Figure 4 presents the overall architecture of the FSF Net model. The first step involves collecting remote sensing channel information captured by the MODIS satellite sensor and using this information to draw true color images of the fire. The second step, based on the true color images, utilizes a deep learning model to identify smoke areas within them and complete semantic segmentation, mapping the identified smoke images to binary images, where the smoke areas are differentiated from other parts. At the same time, by integrating features such as the elevation information and meteorological information of the smoke areas, a machine learning model is used to dynamically calculate the brightness temperature thresholds of each smoke area. Areas exceeding the brightness temperature threshold are mapped to binary, distinguishing the brightness temperature areas from other areas. The third step overlays the binary smoke areas and brightness temperature areas pixel by pixel, based on which a final determination is made as to whether or not it is wildfire smoke.

3.1. Use Deep Learning Models to Identify Smoke Areas in Images

This paper utilizes Mask R-CNN combined with true color smoke images to achieve semantic segmentation of smoke images. Mask R-CNN is a powerful deep learning model used for image segmentation and object detection tasks [40]. Its primary goal is to achieve precise object detection and pixel-level image segmentation. It can also identify multiple objects in an image and generate high-quality segmentation masks for each object. Mask R-CNN integrates two crucial tasks: object detection and segmentation, allowing it to not only determine the location of objects within an image but also to segment the pixels of objects accurately. This combined capability makes it very useful in computer vision tasks, such as instance segmentation, semantic segmentation, and object recognition [41]. Mask R-CNN includes a backbone network and a region proposal network. The backbone network extracts features from the input image, which are then used for subsequent object detection and segmentation tasks. The Region Proposal Network (RPN) is a part of Mask R-CNN used to generate suggestions for potential object locations [42]. It slides a window over the output of the backbone network, identifying areas that may contain objects. The object detection head is responsible for detecting objects in the image and generating the location and category of bounding boxes for each detected object. The overall framework of Mask R-CNN is shown in Figure 5.
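Mask R-CNN style detectors return one mask (and one confidence score) per detected instance, whereas the FSF Net pipeline needs a single binary smoke image. The sketch below shows that post-processing step on synthetic arrays; the function name, the instance-mask format, and both 0.5 thresholds are illustrative assumptions rather than settings from the paper:

```python
import numpy as np

def masks_to_binary(instance_masks, scores, score_thr=0.5):
    """Collapse per-instance smoke probability masks (shape N x H x W,
    one mask per detected smoke instance, each with a confidence score)
    into the single binary smoke image used in the second step of FSF Net.

    Instances below score_thr are discarded; surviving masks are
    binarised at 0.5 and merged with a pixel-wise OR.
    """
    out = np.zeros(instance_masks.shape[1:], dtype=np.uint8)
    for mask, score in zip(instance_masks, scores):
        if score >= score_thr:
            out |= (mask >= 0.5).astype(np.uint8)
    return out
```

The resulting binary image is the smoke layer that is later overlaid, pixel by pixel, with the binarized brightness temperature layer.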

3.2. Recognize and Construct Dynamic Brightness Temperature Threshold Binarized Images Based on Machine Learning

This paper utilizes machine learning technology to determine the dynamic brightness temperature thresholds for each smoke area and binarizes the areas that exceed these thresholds, making the locations of anomalous brightness temperature areas more apparent. Following the dynamic brightness temperature threshold inversion method presented in Section 2.2, we finalize the dynamic brightness temperature thresholds for the smoke areas identified by the deep learning model and mark the pixels that exceed these regional brightness temperature thresholds. The areas containing anomalous brightness temperature information are processed through binarization. Figure 6 shows a schematic diagram of the results of binarizing the anomalous brightness temperature areas.

3.3. Identify Fire Points Based on Smoke Area Brightness Temperature Information

The smoke area segmented by the deep learning model is overlaid with the anomalous brightness temperature area identified by the machine learning model to determine whether the smoke image in this area is real smoke, thereby determining the occurrence of wildfires. This process can be expressed by Formulas (2) and (3):

$SF_{fire+smoke} = P_{fire}(P_{fire_1}, P_{fire_2}, P_{fire_3} \ldots P_{fire_n}) \oplus P_{smoke}(P_{smoke_1}, P_{smoke_2}, P_{smoke_3} \ldots P_{smoke_n})$,

$SF_{\mathrm{Model}} = \begin{cases} P_{fire_1} \oplus P_{smoke_1} = Smoke_{true} \\ P_{fire_0} \oplus P_{smoke_2} = Smoke_{false} \end{cases}$,

where $SF_{fire+smoke}$ represents the dataset combining dynamic fire spot threshold images and smoke images, $P_{fire}$ signifies the dynamic fire spot threshold image, $P_{smoke}$ indicates the smoke image identified by deep learning, $SF_{\mathrm{Model}}$ denotes the model proposed in this paper, $\oplus$ stands for the overlay and fusion of images (the fusion process is shown in Figure 4), and $P_{fire_0}$ represents the absence of fire spots.
By overlaying and analyzing these two types of binary images (smoke and brightness temperature), the algorithm can identify smoke areas more accurately, reducing misjudgments caused by environmental factors (such as cloud cover).
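A minimal sketch of this pixel-wise fusion, with a hypothetical function name and the simple decision rule implied by Formulas (2) and (3): the smoke region is accepted as wildfire smoke only if at least one anomalous brightness temperature pixel falls inside it.

```python
import numpy as np

def fuse_smoke_and_fire(smoke_mask, bt_mask):
    """Overlay the binary smoke mask with the binary anomalous-brightness-
    temperature mask. Returns True (Smoke_true) if any fire-spot pixel
    lies inside the segmented smoke area, else False (Smoke_false)."""
    overlap = np.logical_and(smoke_mask.astype(bool), bt_mask.astype(bool))
    return bool(overlap.any())
```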

3.4. Evaluation Index

This paper primarily identifies wildfire smoke by segmenting smoke in remote sensing images and modeling regional brightness temperature threshold information. Considering the different technologies used in the two processes, appropriate evaluation metrics will be selected to choose the optimal model separately.
(1) In the task of segmenting smoke areas in remote sensing images using deep learning models, $mAP$ and $recall$ are used as the evaluation metrics, with the formulas shown as (4) and (5):

$mAP = \dfrac{1}{N} \sum_{i=1}^{N} AP_i$

$recall = \dfrac{TP}{TP + FN}$

where $AP = \sum_n (R_n - R_{n-1}) P_n$, $TP$ is the number of positive class instances correctly predicted, and $FN$ is the number of positive class instances incorrectly predicted as negative.
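The recall and AP definitions above can be written in a few lines of plain Python (helper names are ours; this AP sketch assumes the precision-recall points are already sorted by increasing recall):

```python
def recall(tp, fn):
    """Recall = TP / (TP + FN)."""
    return tp / (tp + fn)

def average_precision(recalls, precisions):
    """AP = sum over n of (R_n - R_{n-1}) * P_n, for recall/precision
    points sorted by increasing recall (R_0 taken as 0)."""
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recalls, precisions):
        ap += (r - prev_r) * p
        prev_r = r
    return ap
```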
(2) In the task of determining regional brightness temperature thresholds using a machine learning model, $RMSE$ and $MAE$ are selected as the evaluation metrics, as shown in Formulas (6) and (7):

$RMSE = \sqrt{\dfrac{1}{m} \sum_{i=1}^{m} (y_i - \hat{y}_i)^2}$

$MAE = \dfrac{1}{m} \sum_{i=1}^{m} \left| y_i - \hat{y}_i \right|$
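Formulas (6) and (7) translate directly to code (function names are ours):

```python
import math

def rmse(y_true, y_pred):
    """Root Mean Square Error, Formula (6)."""
    m = len(y_true)
    return math.sqrt(sum((y - yh) ** 2 for y, yh in zip(y_true, y_pred)) / m)

def mae(y_true, y_pred):
    """Mean Absolute Error, Formula (7)."""
    m = len(y_true)
    return sum(abs(y - yh) for y, yh in zip(y_true, y_pred)) / m
```

Note that for a test set of size one, as in the leave-one-out validation used later, both formulas reduce to the same absolute error, which is why the experiments report a single RMSE/MAE value in that setting.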
(3) In the task of finally merging the two types of features to identify wildfire smoke, the metric $Precision_{smoke}$ is used to evaluate the model’s overall performance. Formula (8) determines the accuracy of fire spots in each image, and the overall evaluation formula is shown as (9):

$Precision_{Fire} = \dfrac{Number_{fire}}{T_{smoke}}$

$Precision_{smoke} = \dfrac{Sum(Precision_{Fire})}{P_{number}}$

wherein $Precision_{Fire}$ is the accuracy of fire spots in each image, $Number_{fire}$ is the number of smoke instances with fire spots, $T_{smoke}$ is the total number of smoke instances, $Precision_{smoke}$ is the accuracy rate of smoke detection, and $P_{number}$ is the number of images.
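Formulas (8) and (9) amount to a per-image ratio followed by an average over images; a short sketch (function name ours):

```python
def precision_smoke(fire_counts, smoke_totals):
    """Formula (8) per image (smoke instances confirmed by fire spots /
    total smoke instances), then Formula (9): the mean of the per-image
    precisions over the number of images."""
    per_image = [n_fire / t_smoke
                 for n_fire, t_smoke in zip(fire_counts, smoke_totals)]
    return sum(per_image) / len(per_image)
```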

4. Results

4.1. Experimental Results of Smoke Area Segmentation in Remote Sensing Images

In this part of the experiment, this paper compares the performance of two state-of-the-art semantic segmentation models in wildfire smoke area segmentation. Table 3 lists the specific information of the two models (Mask_R-CNN and Cascade_Mask); 500 sets of data were used for model training and 59 sets for model validation.
Figure 7 presents the accuracy of smoke area segmentation by the two models. It is evident that under metrics $mAP_{75}$ and $mAP_{50}$, Mask_R-CNN_ResNet50_FPN achieved the best results, with $mAP_{50} = 0.9722$ and $mAP_{75} = 0.7353$.
It should be noted that the same model’s segmentation accuracy varies significantly between $mAP_{75}$ and $mAP_{50}$. This discrepancy arises because $mAP_{75}$ is calculated at a higher IoU threshold (75% rather than 50%): a higher IoU threshold means the model must perform object detection under stricter overlap requirements, thus demanding higher localization precision.
Figure 8 shows the change in loss values during the training process of the Mask_R-CNN_ResNet50_FPN model. In the figure, loss_cls represents “classification loss”, which measures the model’s errors in the classification part, i.e., the errors when the model attempts to predict a category or categories for a given input. loss_bbox represents “bounding box loss”, which measures the difference between the predicted position and size of the bounding box and the true values. Both types of loss values decrease as the number of iterations increases, indicating that the model is learning and improving its prediction accuracy with each iteration.
In summary, this paper will utilize the Mask_R-CNN_ResNet50_FPN model to complete the semantic segmentation of smoke areas in remote sensing images.

4.2. Wildfire Smoke Detection Visual Results Display

Figure 9 provides examples of visualized results for smoke segmentation in remote sensing images using the Mask_R-CNN_ResNet50_FPN model. The figure shows three groups of images before and after segmentation, where (a) represents the original image and (b) is the image after smoke segmentation has been applied to (a). The results show that smoke detection and segmentation are relatively accurate in remote sensing images with few clouds. However, in some remote sensing images with more clouds, clouds tend to be mistakenly identified as smoke (see groups (A) and (C) in Figure 9). This indicates that relying solely on semantic segmentation in cloud-heavy remote sensing images makes it difficult to precisely separate smoke from clouds, which reduces the accuracy of smoke detection and wildfire detection.

4.3. Bright Temperature Threshold Identification Experimental Results

Determining the dynamic brightness temperature threshold essentially addresses a regression problem, which involves predicting continuous variable values. The determination of dynamic brightness temperature thresholds is crucial for identifying anomalous brightness temperature points and, thereby, accurately identifying smoke. Therefore, the choice of regression model is particularly important; it needs to be able to learn the variation pattern of fire spot brightness temperature data from the data, thus providing reliable regional brightness temperature threshold support for further smoke detection.
Based on the dataset constructed in this paper, leave-one-out cross-validation and full-sample validation are used to test the regression performance of five of the most commonly used machine learning models (RR, LR, SVR, RF, XGR) [43,44,45,46,47,48,49]. In the leave-one-out method, each test set contains a single sample, in which case $RMSE = MAE$. When validating with all samples, $RMSE$ and $MAE$ are likewise used as the evaluation metrics.
Figure 10 and Figure 11, respectively, show the RMSE/MAE scores of each model under leave-one-out cross-validation and full-sample validation. Figure 10 indicates that under leave-one-out cross-validation, the RMSE/MAE values of the five regression models are: RR (0.12525), LR (0.12642), SVR (0.1370), RF (0.12536), and XGR (0.07296). The XGR model has the lowest RMSE/MAE, indicating the best fit for the dynamic brightness temperature thresholds. Figure 11 supports a similar conclusion: among the five models under full-sample validation, XGR shows the best fit, with the lowest RMSE and MAE of all models. Therefore, in the subsequent experiments, this paper uses the XGR model to determine the dynamic brightness temperature thresholds for different regions.
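To make the protocol concrete, the sketch below implements leave-one-out cross-validation for a one-dimensional target with a stand-in mean predictor (a hypothetical placeholder, not one of the paper's five regressors). Because each fold's test set holds exactly one sample, the fold-level RMSE and MAE both reduce to the absolute error, so their fold-averages coincide:

```python
import math

def loo_rmse_mae(ys, fit=lambda train: sum(train) / len(train)):
    """Leave-one-out CV for a 1-D regression target.

    `fit` trains on all samples but one and returns a prediction for the
    held-out sample (here a simple mean predictor stands in for the
    paper's regressors). With a single held-out sample per fold, the
    fold's RMSE and MAE are both |error|, so their averages coincide.
    """
    per_fold_rmse, per_fold_mae = [], []
    for i in range(len(ys)):
        train = ys[:i] + ys[i + 1:]       # leave sample i out
        err = fit(train) - ys[i]
        per_fold_rmse.append(math.sqrt(err * err))  # RMSE of one sample
        per_fold_mae.append(abs(err))               # MAE of one sample
    n = len(ys)
    return sum(per_fold_rmse) / n, sum(per_fold_mae) / n
```

Swapping the `fit` callable for a trained XGR-style regressor would reproduce the evaluation protocol used here.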

4.4. Wildfire Smoke Recognition Results Combining Smoke Semantic Segmentation and Regional Dynamic Brightness Temperature Threshold

The smoke areas segmented by the deep learning model are combined with the regional brightness temperature thresholds identified by the machine learning model to determine whether the smoke in a remote sensing image is indeed from a wildfire. The smoke segmentation images are mapped at the pixel level onto the regional brightness temperature images. If anomalous points exceeding the brightness temperature threshold appear within the smoke area, the area is determined to contain wildfire smoke; otherwise, it does not. Figure 12 displays the detection effect of wildfire smoke detection combined with dynamic brightness temperature thresholds. The left image shows the smoke area segmented by Mask_R-CNN_ResNet50_FPN, while the right image shows the points exceeding the regional brightness temperature threshold identified with the XGR model. Overlaying the two determines more accurately whether the smoke comes from an actual fire, effectively eliminating the influence of clouds and other interferences on wildfire smoke detection.
In Figure 12a, the white area in the left image represents the smoke area obtained from semantic segmentation, and the white points in the right image are the anomalous points exceeding the regional brightness temperature threshold. Overlaying the two images shows that the two areas share 11 anomalous brightness temperature pixels. This indicates that a fire occurred in the area and that the smoke in the image is from a real wildfire.
Similarly, the process can determine that the smoke identified in Figure 12b is also from a wildfire, as anomalous brightness temperature points appear in that area. In the smoke area shown in Figure 12c, no anomalous brightness temperature points are present, leading to the conclusion that the area does not contain wildfire smoke and is likely a misidentification of clouds or other interferences.
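The pixel-level overlay logic described above can be sketched as a boolean-mask intersection. The function below is an illustrative simplification: the mask shapes, the `min_overlap` parameter, and its default of a single pixel are our assumptions, not values from the paper.

```python
import numpy as np

def contains_wildfire_smoke(smoke_mask, hot_mask, min_overlap=1):
    """Confirm a segmented smoke region as wildfire smoke only when at
    least `min_overlap` anomalous brightness-temperature pixels fall
    inside it; otherwise treat it as cloud or other interference.

    smoke_mask, hot_mask: 2-D boolean arrays of the same shape.
    Returns (is_wildfire_smoke, overlap_pixel_count).
    """
    overlap = int(np.logical_and(smoke_mask, hot_mask).sum())
    return overlap >= min_overlap, overlap
```

In the Figure 12a example, such an overlay would report an overlap count of 11 and confirm the region as wildfire smoke.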
Figure 13 presents the smoke recognition performance before and after integrating brightness temperature thresholds, using Precision_smoke as the evaluation metric. SFN represents our proposed FSF Net, while SN refers to a smoke detection model that uses image segmentation alone. After incorporating brightness temperature thresholds, the precision of smoke recognition increased from 85.70% to 89.12%, an improvement of 3.42 percentage points. This indicates that checking for anomalous brightness temperature points within the smoke area helps eliminate the influence of clouds and other interferences, effectively enhancing the precision of smoke recognition.

4.5. Comparison with Advanced Models

To verify whether the FSF Net forest fire detection model further improves fire smoke recognition accuracy after excluding cloud interference, its smoke detection performance on the MODIS_Smoke_FPT dataset was compared with that of YOLOv9 [50] and the Real-Time DEtection TRansformer (RT-DETR) [51]. As shown in Figure 14, the precision rates of YOLOv9, RT-DETR, and FSF Net are 0.8326, 0.8259, and 0.8912, respectively. The FSF Net model clearly outperforms the other two in smoke detection precision. This demonstrates that introducing dynamic brightness temperature thresholds in forest fire smoke detection not only effectively reduces cloud interference but also enhances the accuracy of smoke detection.

5. Discussion

The visualization results indicate that the wildfire smoke detection model proposed in this paper, Forest Smoke-Fire Net (FSF Net), achieves strong performance in detecting wildfire smoke. The model combines semantic segmentation of remote sensing images with the identification of dynamic fire spot brightness temperature thresholds, so that the features obtained from the two tasks complement each other. It effectively eliminates the interference of clouds and other elements in remote sensing images, enhancing the accuracy of wildfire smoke detection. In previous long-duration, wide-area wildfire smoke detection studies, these two aspects were often treated as independent parts: wildfire smoke was identified in remote sensing images solely through deep semantic segmentation, or regional environmental information was explored solely through machine learning [18,19,20,21]. Table 4 compares these method combinations.
These independent studies could only provide one-sided information for wildfire smoke detection, leading to frequent false positives and false negatives in long-duration, wide-area wildfire smoke detection. Our experiments verify that integrating the two tasks effectively complements wildfire characteristics. Semantic segmentation of remote sensing images provides a baseline for wildfire smoke detection areas, and regional brightness temperature data helps eliminate the influence of clouds and other interferences. The organic integration of these two tasks effectively enhances the model’s ability to recognize real wildfire smoke.
However, due to the traditional separation of the two tasks, there was no dataset combining both tasks in practice [22,23,24]. This prompted us to collect and integrate various wildfire data, forming the MODIS_Smoke_FPT dataset that includes both research tasks. This dataset not only contains the true color images processed from remote sensing images but also includes the brightness temperature information, elevation information, and meteorological information related to wildfire locations, providing a fundamental data guarantee for exploring the effects of combining the two tasks.
In terms of integrating machine learning and deep learning, this paper overlays the dynamic fire spot brightness temperature maps (fire spot location maps) produced by the machine learning model onto the wildfire smoke images identified by deep learning. Through binary image comparison for overlap, if a fire occurs below the smoke area, that region can be confirmed as wildfire smoke. Conversely, if the wildfire area and the smoke are far apart, or if no wildfire or high-temperature area lies below the identified smoke, we treat the detection as a false positive, meaning the region is not wildfire smoke. When multiple smoke plumes exist in the same image, often mixed with clouds, identifying wildfire smoke becomes extremely difficult. In such cases, if the wildfire area is concentrated, the wildfire smoke is mostly distributed near and above it; if the wildfire area is dispersed, region-by-region detection can follow, segmenting discontinuous fire areas into independent blocks for further smoke detection. Because the MODIS sensors we selected collect data over large areas, a single wildfire smoke image covers a vast region. Moreover, under long-duration, wide-area detection, smoke gradually dissipates with distance, and identifying nearly dissipated smoke contributes little to the timely discovery of wildfires. We therefore exclude cases where the wildfire area and the smoke are far apart. Specifically, in long-duration, wide-area settings, aggregated smoke lingers near and above the fire area and is rarely found far from it.
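Splitting a dispersed fire area into independent blocks, as described above, amounts to connected-component labeling of the binary anomaly mask. The flood-fill sketch below is one possible implementation (4-connectivity and the function name are our assumptions, not the paper's code):

```python
from collections import deque

def fire_blocks(mask):
    """Label 4-connected regions of True pixels in a 2-D boolean grid,
    returning one list of (row, col) pixels per independent fire block."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    blocks = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # Breadth-first flood fill from this unvisited fire pixel.
                block, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    block.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                blocks.append(block)
    return blocks
```

Each returned block can then be matched independently against the smoke segmentation, as the region-by-region strategy suggests.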
In this approach, the detection of smoke and brightness temperature thresholds is no longer conducted independently. Instead, by comprehensively utilizing the information provided by both, the accuracy of smoke detection or wildfire detection is enhanced. This strategy of combining the semantic segmentation of wildfire smoke with regional brightness temperature information allows for the full integration and utilization of remote sensing images and actual regional environmental information. This provides more valuable feature information for wildfire smoke detection, improving the detection outcomes and further enhancing the model’s generalizability and robustness.
Moreover, besides MODIS data, the model proposed in this paper can also be applied to sensor data from other satellites, such as the Himawari-8, Landsat-8, and the Gaofen-4 (GF-4) satellite, among others. For example, the Himawari-8 satellite can use channels 3, 2, and 1 to create true color images. The Landsat-8 satellite can use channels 4, 3, and 2 for true color imaging. The Gaofen-4 satellite can produce true color images using red, green, and blue channels. In terms of fire point threshold calculation and inversion, the Himawari-8 satellite can detect high-temperature features on the ground using channels 7 and 14, commonly used for fire point detection. Landsat-8 can specifically use channels 10 and 11 to measure ground surface temperatures, particularly effective in detecting and monitoring fire hotspots. The Gaofen-4 satellite uses infrared channels to detect the locations of forest fire hotspots.
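The true-color band combinations above can be kept in a small lookup table so one compositing routine serves multiple sensors. The dictionary below restates the channels given in the text plus the MODIS combination implied by Table 1; the key names and structure are our own, and Gaofen-4 is omitted because the text does not give numeric channel indices for it:

```python
# (red, green, blue) channel indices for true-color compositing.
# Himawari-8 and Landsat-8 triples come from the text; the MODIS triple
# follows Table 1 (band 1 = red, band 4 = green, band 3 = blue).
TRUE_COLOR_BANDS = {
    "MODIS":      (1, 4, 3),
    "Himawari-8": (3, 2, 1),
    "Landsat-8":  (4, 3, 2),
}

def true_color_channels(sensor):
    """Return the (red, green, blue) channel indices for a sensor."""
    return TRUE_COLOR_BANDS[sensor]
```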
Additionally, since MODIS operates from a polar orbit, the spatial features provided by its remote sensing data are limited. In the future, geostationary satellites could be considered, to delve deeper into the spatiotemporal information of wildfire smoke, further exploring the dynamic diffusion properties of smoke and the timeliness of wildfire smoke within fixed areas, offering valuable support for the sustainable development of forest ecosystems. Simultaneously, this paper encourages researchers engaged in remote sensing and wildfire detection tasks to continue exploring and expanding the MODIS_Smoke_FPT dataset.

6. Conclusions

This paper proposes a wide-area wildfire smoke detection model, Forest Smoke-Fire Net (FSF Net), that combines semantic segmentation of remote sensing images with inversion of fire spot brightness temperature thresholds, in order to address the misidentifications and missed detections caused by the difficulty of distinguishing smoke from clouds and other interferences in complex scenes. By collecting hundreds of remote sensing images of major wildfires worldwide and integrating the corresponding meteorological, elevation, and fire spot brightness temperature information, a wildfire smoke dataset suitable for the fusion task (MODIS_Smoke_FPT) was constructed. The deep learning model Mask_R-CNN_ResNet50_FPN segments the smoke areas in remote sensing images, providing the base area for ultimately identifying real smoke. The XGR model then adaptively inverts the brightness temperature thresholds in smoke areas, yielding the distribution of anomalous brightness temperature pixels in each area. The derived thresholds differ between smoke areas because each area has distinct environmental and climatic characteristics, leading to different minimum fire spot temperature thresholds. By overlaying and matching the smoke segmentation map with the anomalous brightness temperature map, it can be determined whether real wildfire smoke is present. The experimental results demonstrate that the proposed model effectively improves the precision of wildfire smoke detection, which is of significant value for reducing cloud misidentification and enhancing smoke detection accuracy.
In the future, further exploration into the spatiotemporal information and inherent characteristics of wildfire smoke will continue, aiming to enhance the effectiveness of wildfire smoke detection and provide more valuable support for forest protection.

Author Contributions

Conceptualization, M.W.; methodology, Y.D. and Y.F.; formal analysis, M.W.; investigation, Y.D.; resources, M.W.; data curation, Y.D. and Q.W.; writing—original draft preparation, Y.D.; writing—review and editing, Y.D.; supervision, M.W.; funding acquisition, M.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

We have made the collected and organized MODIS_Smoke_FPT data set public, and the data can be downloaded through https://pan.baidu.com/s/1W4i-E9E92qqySgejfZ3E1Q?pwd=tvpw (accessed on 10 February 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

In this paper, the term “fire spot threshold” refers to the threshold temperature of a fire at the time of occurrence. In previous fire spot detection algorithms, a fixed-threshold criterion was used in which a point could be identified as a fire spot if the surface brightness temperature exceeded 325 K. However, this threshold is not absolute and may be adjusted in practice, as fire spot detection depends not only on brightness temperature but also on other factors, such as background temperature, land surface type, and other environmental variables.
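A minimal sketch of the classic fixed-threshold criterion described above, using the 325 K value from the text (background-temperature and land-surface adjustments are deliberately out of scope here):

```python
import numpy as np

FIXED_FIRE_THRESHOLD_K = 325.0  # classic fixed criterion from the text

def fixed_threshold_fire_spots(bt_kelvin, threshold=FIXED_FIRE_THRESHOLD_K):
    """Flag pixels whose surface brightness temperature strictly exceeds
    the fixed threshold. A dynamic scheme, as in the cited previous work,
    would replace `threshold` with a per-region value."""
    return np.asarray(bt_kelvin) > threshold
```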
Based on this, the “fire spot threshold” we use refers to the method of calculating dynamic fire spot thresholds from our previous work [32].

Appendix B

Table A1. Source code for exporting the 36 channels from a MODIS HDF file.
Source Code:
import os
import numpy as np
from pyhdf.SD import SD, SDC

HDF_FILE_URL = r"demo.hdf"
file = SD(HDF_FILE_URL)

# List every dataset in the HDF file with its attributes and dimensions.
datasets_dic = file.datasets()
for idx, sds in enumerate(datasets_dic.keys()):
    print(idx, sds)
    sds_obj = file.select(idx)
    print(sds_obj.attributes())
    print(sds_obj.dimensions())
    print('-' * 70)

# Latitude = 0
sds_obj = file.select(0)
latitude = sds_obj.get()

# Longitude = 1
sds_obj = file.select(1)
longitude = sds_obj.get()

# EV_1KM_Emissive = 4 (thermal emissive bands 20-25 and 27-36)
sds_obj = file.select(4)
data = sds_obj.get()
Band20_DN = data[0, :]
Band21_DN = data[1, :]
Band22_DN = data[2, :]
Band23_DN = data[3, :]
Band24_DN = data[4, :]
Band25_DN = data[5, :]
Band27_DN = data[6, :]
Band28_DN = data[7, :]
Band29_DN = data[8, :]
Band30_DN = data[9, :]
Band31_DN = data[10, :]
Band32_DN = data[11, :]
Band33_DN = data[12, :]
Band34_DN = data[13, :]
Band35_DN = data[14, :]
Band36_DN = data[15, :]

# EV_1KM_RefSB = 2 (reflective solar bands 8-19 and 26)
sds_obj = file.select(2)
data = sds_obj.get()
Band8_DN = data[0, :]
Band9_DN = data[1, :]
Band10_DN = data[2, :]
Band11_DN = data[3, :]
Band12_DN = data[4, :]
Band13lo_DN = data[5, :]
Band13hi_DN = data[6, :]
Band14lo_DN = data[7, :]
Band14hi_DN = data[8, :]
Band15_DN = data[9, :]
Band16_DN = data[10, :]
Band17_DN = data[11, :]
Band18_DN = data[12, :]
Band19_DN = data[13, :]
Band26_DN = data[14, :]

# EV_250_Aggr1km_RefSB = 6 (bands 1-2 aggregated to 1 km)
sds_obj = file.select(6)
data = sds_obj.get()
Band1_DN = data[0, :]
Band2_DN = data[1, :]

# EV_500_Aggr1km_RefSB = 9 (bands 3-7 aggregated to 1 km)
sds_obj = file.select(9)
data = sds_obj.get()
Band3_DN = data[0, :]
Band4_DN = data[1, :]
Band5_DN = data[2, :]
Band6_DN = data[3, :]
Band7_DN = data[4, :]

References

  1. Ryu, J.-H.; Han, K.-S.; Hong, S.; Park, N.-W.; Lee, Y.-W.; Cho, J. Satellite-Based Evaluation of the Post-Fire Recovery Process from the Worst Forest Fire Case in South Korea. Remote Sens. 2018, 10, 918. [Google Scholar] [CrossRef]
  2. Assessment, C. Fourth National Climate Assessment; U.S. Global Change Research Program: Washington, DC, USA, 2018.
  3. Jodhani, K.H.; Patel, H.; Soni, U.; Patel, R.; Valodara, B.; Gupta, N.; Patel, A.; Omar, P.J. Assessment of forest fire severity and land surface temperature using Google Earth Engine: A case study of Gujarat State, India. Fire Ecol. 2024, 20, 23. [Google Scholar] [CrossRef]
  4. Park, M.; Tran, D.Q.; Jung, D.; Park, S. Wildfire-Detection Method Using DenseNet and CycleGAN Data Augmentation-Based Remote Camera Imagery. Remote Sens. 2020, 12, 3715. [Google Scholar] [CrossRef]
  5. Govil, K.; Welch, M.L.; Ball, J.T.; Pennypacker, C.R. Preliminary Results from a Wildfire Detection System Using Deep Learning on Remote Camera Images. Remote Sens. 2020, 12, 166. [Google Scholar] [CrossRef]
  6. Mukhiddinov, M.; Abdusalomov, A.B.; Cho, J. A Wildfire Smoke Detection System Using Unmanned Aerial Vehicle Images Based on the Optimized YOLOv5. Sensors 2022, 22, 9384. [Google Scholar] [CrossRef] [PubMed]
  7. Xu, G.; Zhang, Q.; Liu, D.; Lin, G.; Wang, J.; Zhang, Y. Adversarial adaptation from synthesis to reality in fast detector for smoke detection. IEEE Access 2019, 7, 29471–29483. [Google Scholar] [CrossRef]
  8. Randriambelo, T.; Baldy, S.; Bessafi, M.; Petit, M.; Despinoy, M. An improved detection and characterization of active fires and smoke plumes in south-eastern Africa and Madagascar. Int. J. Remote Sens. 1998, 19, 2623–2638. [Google Scholar] [CrossRef]
  9. Twohy, C.H.; Toohey, D.W.; Levin, E.J.T.; DeMott, P.J.; Rainwater, B.; Garofalo, L.A.; Pothier, M.A.; Farmer, D.K.; Kreidenweis, S.A.; Pokhrel, R.P.; et al. Biomass burning smoke and its influence on clouds over the western U.S. Geophys. Res. Lett. 2021, 48, e2021GL094224. [Google Scholar] [CrossRef]
  10. Anwar, M.N.; Shabbir, M.; Tahir, E.; Iftikhar, M.; Saif, H.; Tahir, A.; Murtaza, M.A.; Khokhar, M.F.; Rehan, M.; Aghbashlo, M.; et al. Emerging challenges of air pollution and particulate matter in China, India, and Pakistan and mitigating solutions. J. Hazard. Mater. 2021, 416, 125851. [Google Scholar] [CrossRef]
  11. Kadir, E.A.; Rosa, S.L.; Syukur, A.; Othman, M.; Daud, H. Forest fire spreading and carbon concentration identification in tropical region Indonesia. Alex. Eng. J. 2022, 61, 1551–1561. [Google Scholar] [CrossRef]
  12. Sun, X.; Sun, L.; Huang, Y. Forest fire smoke recognition based on convolutional neural network. J. For. Res. 2021, 32, 1921–1927. [Google Scholar] [CrossRef]
  13. Qiang, X.; Zhou, G.; Chen, A.; Zhang, X.; Zhang, W. Forest fire smoke detection under complex backgrounds using TRPCA and TSVB. Int. J. Wildland Fire 2021, 30, 329–350. [Google Scholar] [CrossRef]
  14. Wooster, M.J.; Roberts, G.J.; Giglio, L.; Roy, D.P.; Freeborn, P.H.; Boschetti, L.; Justice, C.; Ichoku, C.; Schroeder, W.; Davies, D.; et al. Satellite remote sensing of active fires: History and current status, applications and future requirements. Remote Sens. Environ. 2021, 267, 112694. [Google Scholar] [CrossRef]
  15. Pontes-Lopes, A.; Dalagnol, R.; Dutra, A.C.; de Jesus Silva, C.V.; de Alencastro Graça, P.M.L.; de Oliveira e Cruz de Aragão, L.E. Quantifying Post-Fire Changes in the Aboveground Biomass of an Amazonian Forest Based on Field and Remote Sensing Data. Remote Sens. 2022, 14, 1545. [Google Scholar] [CrossRef]
  16. Zhang, Q.; Lin, G.; Zhang, Y.; Xu, G.; Wang, J. Wildland forest fire smoke detection based on faster R-CNN using synthetic smoke images. Procedia Eng. 2018, 211, 441–446. [Google Scholar] [CrossRef]
  17. Yuan, C.; Liu, Z.; Zhang, Y. Learning-based smoke detection for unmanned aerial vehicles applied to forest fire surveillance. J. Intell. Robot. Syst. 2019, 93, 337–349. [Google Scholar] [CrossRef]
  18. Huang, J.; Zhou, J.; Yang, H.; Liu, Y.; Liu, H. A Small-Target Forest Fire Smoke Detection Model Based on Deformable Transformer for End-to-End Object Detection. Forests 2023, 14, 162. [Google Scholar] [CrossRef]
  19. Li, J.; Xu, R.; Liu, Y. An improved forest fire and smoke detection model based on yolov5. Forests 2023, 14, 833. [Google Scholar] [CrossRef]
  20. Ba, R.; Chen, C.; Yuan, J.; Song, W.; Lo, S. SmokeNet: Satellite Smoke Scene Detection Using Convolutional Neural Network with Spatial and Channel-Wise Attention. Remote Sens. 2019, 11, 1702. [Google Scholar] [CrossRef]
  21. Li, X.; Song, W.; Lian, L.; Wei, X. Forest Fire Smoke Detection Using Back-Propagation Neural Network Based on MODIS Data. Remote Sens. 2015, 7, 4473–4498. [Google Scholar] [CrossRef]
  22. Dewangan, A.; Pande, Y.; Braun, H.-W.; Vernon, F.; Perez, I.; Altintas, I.; Cottrell, G.W.; Nguyen, M.H. FIgLib & SmokeyNet: Dataset and Deep Learning Model for Real-Time Wildland Fire Smoke Detection. Remote Sens. 2022, 14, 1007. [Google Scholar] [CrossRef]
  23. Liu, H.; Li, J.; Du, J.; Zhao, B.; Hu, Y.; Li, D.; Yu, W. Identification of Smoke from Straw Burning in Remote Sensing Images with the Improved YOLOv5s Algorithm. Atmosphere 2022, 13, 925. [Google Scholar] [CrossRef]
  24. Nelson, D.L.; Chen, Y.; Kahn, R.A.; David, J.D.; Mazzoni, D. Example applications of the MISR INteractive eXplorer (MINX) software tool to wildfire smoke plume analyses. Remote Sens. Fire Sci. Appl. SPIE 2008, 7089, 65–75. [Google Scholar]
  25. Sicard, M.; Granados-Muñoz, M.J.; Alados-Arboledas, L.; Bedoya-Velásquez, A.E.; Benavent-Oltra, J.A.; Bortoli, D.; Comerón, A.; Córdoba-Jabonero, C.; Costa, M.J.; del Águila, A.; et al. Ground/space, passive/active remote sensing observations coupled with particle dispersion modelling to understand the inter-continental transport of wildfire smoke plumes. Remote Sens. Environ. 2019, 232, 111294. [Google Scholar] [CrossRef]
  26. Hess, L.L.; Novo, E.; Slaymaker, D.M.; Holt, J.; Steffen, C.; Valeriano, D.M.; Mertes, K.A.L.; Krug, T.; Melack, J.M.; Gastil, M.; et al. Geocoded digital videography for validation of land cover mapping in the Amazon basin. Int. J. Remote Sens. 2002, 23, 1527–1555. [Google Scholar] [CrossRef]
  27. Yuan, F.; Li, K.; Wang, C.; Fang, Z. A lightweight network for smoke semantic segmentation. Pattern Recognit. 2023, 137, 109289. [Google Scholar] [CrossRef]
  28. Miao, S.; Lin, H.; Gao, H.; Dong, L. Strip smoke and cloud recognition in satellite image. In Proceedings of the 2016 9th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), Datong, China, 15–17 October 2016; pp. 303–307. [Google Scholar]
  29. Sokolik, I.N.; Soja, A.J.; DeMott, P.J.; Winker, D. Progress and challenges in quantifying wildfire smoke emissions, their properties, transport, and atmospheric impacts. J. Geophys. Res. Atmos. 2019, 124, 13005–13025. [Google Scholar] [CrossRef]
  30. Lu, Z.; Sokolik, I.N. The effect of smoke emission amount on changes in cloud properties and precipitation: A case study of Canadian boreal wildfires of 2007. J. Geophys. Res. Atmos. 2013, 118, 11777–11793. [Google Scholar] [CrossRef]
  31. Yu, P.; Davis, S.M.; Toon, O.B.; Portmann, R.W.; Bardeen, C.G.; Barnes, J.E.; Telg, H.; Maloney, C.; Rosenlof, K.H. Persistent stratospheric warming due to 2019–2020 Australian wildfire smoke. Geophys. Res. Lett. 2021, 48, e2021GL092609. [Google Scholar] [CrossRef]
  32. Ding, Y.; Wang, M.; Fu, Y.; Zhang, L.; Wang, X. A Wildfire Detection Algorithm Based on the Dynamic Brightness Temperature Threshold. Forests 2023, 14, 477. [Google Scholar] [CrossRef]
  33. Miao, S.; Xia, M.; Qian, M.; Zhang, Y.; Liu, J.; Lin, H. Cloud/shadow segmentation based on multi-level feature enhanced network for remote sensing imagery. Int. J. Remote Sens. 2022, 43, 5940–5960. [Google Scholar] [CrossRef]
  34. Bu, X.; Liu, K.; Liu, J.; Ding, Y. A Harmful Algal Bloom Detection Model Combining Moderate Resolution Imaging Spectroradiometer Multi-Factor and Meteorological Heterogeneous Data. Sustainability 2023, 15, 15386. [Google Scholar] [CrossRef]
  35. Chen, C.; Yuan, G.; Zhou, H.; Ma, Y.; Eng, M.M.B. Optimized YOLOv7-tiny model for smoke detection in power transmission lines. Math. Biosci. Eng. 2023, 20, 19300–19319. [Google Scholar] [CrossRef] [PubMed]
  36. Maris, M.; Romelli, E.; Tomasi, M.; Gregorio, A.; Gregorio, A.; Sandri, M.; Galeotta, S.; Tavagnacco, D.; Frailis, M.; Maggio, G.; et al. Revised planet brightness temperatures using the Planck/LFI 2018 data release. Astron. Astrophys. 2021, 647, A104. [Google Scholar] [CrossRef]
  37. Chen, H.; Meng, X.; Li, L.; Ni, K. Quality Assessment of FY-3D/MERSI-II Thermal Infrared Brightness Temperature Data from the Arctic Region: Application to Ice Surface Temperature Inversion. Remote Sens. 2022, 14, 6392. [Google Scholar] [CrossRef]
  38. Chen, L.; Gao, J.; Lopes, A.M.; Zhang, Z.; Chu, Z.; Wu, R. Adaptive fractional-order genetic-particle swarm optimization Otsu algorithm for image segmentation. Appl. Intell. 2023, 53, 26949–26966. [Google Scholar] [CrossRef]
  39. Otsu, N. A threshold selection method from gray-level histograms. Automatica 1975, 11, 23–27. [Google Scholar] [CrossRef]
  40. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969. [Google Scholar]
  41. Wang, T.; Zhang, K.; Zhang, W.; Wang, R.; Wan, S.; Rao, Y.; Jiang, Z.; Gu, L. Tea picking point detection location based on Mask-RCNN. Inf. Process. Agric. 2023, 10, 267–275. [Google Scholar] [CrossRef]
  42. Bi, X.; Hu, J.; Xiao, B.; Li, W.; Gao, X. IEMask R-CNN: Information-enhanced mask R-CNN. IEEE Trans. Big Data 2022, 9, 688–700. [Google Scholar] [CrossRef]
  43. Wong, T.T. Performance evaluation of classification algorithms by k-fold and leave-one-out cross validation. Pattern Recognit. 2015, 48, 2839–2846. [Google Scholar] [CrossRef]
  44. Chen, T.; Guestrin, C. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, NY, USA, 13–17 August 2016; pp. 785–794. [Google Scholar]
  45. Rigatti, S.J. Random forest. J. Insur. Med. 2017, 47, 31–39. [Google Scholar] [CrossRef] [PubMed]
  46. Tsigler, A.; Bartlett, P.L. Benign overfitting in ridge regression. J. Mach. Learn. Res. 2023, 24, 1–76. [Google Scholar]
  47. Singha, C.; Swain, K.C.; Moghimi, A.; Foroughnia, F.; Swain, S.K. Integrating geospatial, remote sensing, and machine learning for climate-induced forest fire susceptibility mapping in Similipal Tiger Reserve, India. For. Ecol. Manag. 2024, 555, 121729. [Google Scholar] [CrossRef]
  48. Rahmatinejad, Z.; Dehghani, T.; Hoseini, B.; Rahmatinejad, F.; Lotfata, A.; Reihani, H.; Eslami, S. A comparative study of explainable ensemble learning and logistic regression for predicting in-hospital mortality in the emergency department. Sci. Rep. 2024, 14, 3406. [Google Scholar] [CrossRef] [PubMed]
  49. Niazkar, M.; Menapace, A.; Brentan, B.; Piraei, R.; Jimenez, D.; Dhawan, P.; Righetti, M. Applications of XGBoost in water resources engineering: A systematic literature review (Dec 2018–May 2023). Environ. Model. Softw. 2024, 174, 105971. [Google Scholar] [CrossRef]
  50. Wang, C.Y.; Yeh, I.H.; Liao, H.Y.M. YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information. arXiv 2024, arXiv:2402.13616. [Google Scholar]
  51. Zhao, Y.; Lv, W.; Xu, S.; Wei, J.; Wang, G.; Dang, Q.; Liu, Y.; Chen, J. DETRs Beat YOLOs on Real-Time Object Detection. arXiv 2023, arXiv:2304.08069. [Google Scholar]
Figure 1. The global distribution of forest fires; red five-pointed stars mark wildfire locations.
Figure 2. Example of forest fire smoke image with smoke and clouds.
Figure 4. Schematic diagram of the overall architecture of the FSF Net model.
Figure 5. Mask R-CNN overall framework diagram.
Figure 6. Binarized maps of abnormal brightness temperature areas. (a) Brightness temperature region of aggregated fire points. (b) Brightness temperature region of mildly dispersed fire points. White represents the abnormal brightness temperature (fire point) area; black represents areas without abnormal brightness temperature.
Figure 7. Model accuracy for wildfire smoke segmentation detection. Red represents mAP50, and blue represents mAP75. MRR50F, MRR101F, CMRR50F, and CMRR101F correspond to Mask_R-CNN_ResNet50_FPN, Mask_R-CNN_ResNet101_FPN, Cascade_Mask_R-CNN_ResNet50_FPN, and Cascade_Mask_R-CNN_ResNet101_FPN, respectively.
Figure 8. Loss trend chart of Mask_R-CNN_ResNet50_FPN.
Figure 9. Visualization of wildfire smoke semantic segmentation results. Group (A): a single smoke plume, where smoke and clouds are clearly distinguishable. Group (B): smoke together with clouds. Group (C): smoke and clouds mixed.
Figure 10. RMSE/MAE scores using leave-one-out cross-validation.
Figure 11. RMSE/MAE scores validated using all data.
Figure 12. Detection results of the wildfire smoke detection model combined with dynamic fire point thresholds. (a) A fire point at the root of dispersed smoke. (b) A fire point under concentrated smoke. (c) The fire point and the “smoke” are far apart, i.e., a cloud is excluded.
Figure 13. Figure (a) shows a comparison of the accuracy between the wildfire smoke detection model that integrates dynamic fire point thresholds and the smoke detection model that uses images alone. Figure (b) presents a schematic diagram of fire smoke detection and cloud discrimination.
Figure 14. Comparison of smoke detection performance on the MODIS_Smoke_FPT dataset among three models.
Table 1. Characteristics of MODIS sensor bands 1, 3, and 4.
Band | Wavelength (nm) | Spectral Characteristics | Main Application
1 | 620–670 | red light | land/cloud boundary
3 | 459–479 | blue light | land/cloud properties
4 | 545–565 | green light | land/cloud properties
Table 2. Elevation information and meteorological information.
Serial Number | Information Name
1 | elevation
2 | average temperature
3 | dew point temperature
4 | air pressure
5 | visibility
6 | average wind speed
7 | maximum sustained wind speed
8 | maximum temperature
9 | minimum temperature
Table 3. Experimental model.
Model | Backbone Network | Feature Pyramid
Mask_R-CNN | ResNet50 | FPN
Mask_R-CNN | ResNet101 | FPN
Cascade_Mask_R-CNN | ResNet50 | FPN
Cascade_Mask_R-CNN | ResNet101 | FPN
Table 4. Comparative Study on Forest Fire Smoke Detection.
Reference Number | Smoke Detection Accuracy | Distinguishes Smoke from Clouds? | Integrates Fire Information to Judge Wildfire Smoke?
[18] | 0.8840 | No | No
[19] | 0.6040 | No | No
[20] | 0.9275 | Yes | No
[21] | 0.9763 | Yes | No
Ours | 0.8912 | Yes | Yes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
