Forest and land fires pose significant environmental, economic, and public health challenges worldwide, particularly in regions with extensive forest coverage and prolonged dry seasons. Early and accurate detection is essential to mitigate damage and support rapid response efforts. This study proposes a deep learning–based approach for forest fire image classification using three prominent models: MobileNetV3, ResNet50, and YOLOv8. A curated dataset of forest fire images was employed, consisting of fire and non-fire scenes captured under diverse environmental conditions, including variations in illumination, smoke density, and background complexity. Prior to model training, all images underwent preprocessing steps such as resizing, normalization, and data augmentation to improve robustness and generalization. The performance of each model was evaluated using standard classification metrics, including accuracy, precision, recall, F1-score, Matthews Correlation Coefficient (MCC), and Cohen’s Kappa. Experimental results indicate that YOLOv8 achieved the best overall performance, with an accuracy of 0.952, precision of 0.9566, recall of 0.952, F1-score of 0.9519, MCC of 0.9412, and Kappa of 0.9400. ResNet50 demonstrated competitive performance with an accuracy of 0.940, slightly outperforming MobileNetV3, which achieved an accuracy of 0.938. The findings highlight that while lightweight architectures such as MobileNetV3 provide efficient performance suitable for resource-constrained environments, more advanced detection frameworks like YOLOv8 offer superior classification capability. Overall, this research demonstrates the effectiveness of modern deep learning models for automated forest fire image classification and supports their potential deployment in real-time early warning and environmental monitoring systems.
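The evaluation metrics named above (accuracy, precision, recall, F1-score, MCC, and Cohen's Kappa) can be computed for a binary fire / non-fire classifier as in the following sketch. This is an illustrative example using scikit-learn's metric functions with toy labels, not the study's actual data or code; the weighted averaging shown is one common choice and is assumed, not taken from the paper.

```python
# Illustrative sketch: computing the classification metrics reported in the
# study for a binary fire / non-fire task. The labels below are toy values.
from sklearn.metrics import (
    accuracy_score,
    precision_score,
    recall_score,
    f1_score,
    matthews_corrcoef,
    cohen_kappa_score,
)

# 1 = fire, 0 = non-fire (hypothetical ground truth and predictions)
y_true = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 0, 1, 1, 1, 0]

metrics = {
    "accuracy":  accuracy_score(y_true, y_pred),
    # Weighted averaging matches how per-class scores are often aggregated
    # when class frequencies differ; this choice is an assumption here.
    "precision": precision_score(y_true, y_pred, average="weighted"),
    "recall":    recall_score(y_true, y_pred, average="weighted"),
    "f1":        f1_score(y_true, y_pred, average="weighted"),
    "mcc":       matthews_corrcoef(y_true, y_pred),
    "kappa":     cohen_kappa_score(y_true, y_pred),
}

for name, value in metrics.items():
    print(f"{name}: {value:.4f}")
```

MCC and Kappa are included alongside accuracy because both correct for chance agreement, which makes them more informative than raw accuracy when the fire and non-fire classes are imbalanced.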
Copyright © 2026