Forest fire mitigation requires an early detection system that is both fast and reliable. This study presents a real-time potential forest fire detection system based on UAV imagery using the YOLOv10 object detection model. The main objective is to improve the accuracy of fire and smoke detection in aerial imagery and to minimize false alarms through hyperparameter optimization and data balancing strategies. The dataset was compiled from Roboflow Universe and Kaggle and consists of two object classes, fire and smoke, with a slight class imbalance (1,329 fire versus 1,024 smoke instances). In total, 1,691 annotated images covering various lighting conditions, smoke densities, camera angles, and geographic backgrounds were divided into training, validation, and test sets at a ratio of approximately 75:15:10. To address the class imbalance and visual variability, data augmentation techniques such as rotation, flipping, brightness adjustment, and noise addition were applied, along with loss weighting to improve learning for the minority smoke class. Model training was conducted across 24 hyperparameter configurations combining six optimizers, two batch sizes, and two learning rates. The best configuration used the NAdam optimizer, a batch size of 24, and a learning rate of 0.001, achieving an accuracy, precision, recall, F1-score, mean IoU, and mAP of 0.879, 0.8705, 0.8575, 0.863, 0.7373, and 0.870, respectively. Real-time testing with a DJI Mini 4 Pro UAV using an RTMP livestream as input demonstrated stable and responsive detection, displaying bounding boxes, class labels, confidence scores, and a “POTENTIAL FOREST FIRE” indicator whenever fire and smoke were detected simultaneously. These findings confirm that integrating UAV and YOLOv10 technologies provides an effective and adaptive approach to real-time early detection of potential forest fires.
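The 24-configuration grid (six optimizers × two batch sizes × two learning rates) and the joint fire-and-smoke alert rule described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: only NAdam, batch size 24, and learning rate 0.001 are named in the text, so the remaining optimizer names, the alternate batch size and learning rate, and the helper function names are assumptions.

```python
from itertools import product

# Grid from the study: six optimizers x two batch sizes x two learning
# rates = 24 configurations. Only NAdam is named in the abstract; the
# other five optimizers below are assumed for illustration.
OPTIMIZERS = ["NAdam", "Adam", "AdamW", "SGD", "RMSProp", "RAdam"]
BATCH_SIZES = [16, 24]          # 24 is reported as best; 16 is assumed
LEARNING_RATES = [0.001, 0.01]  # 0.001 is reported as best; 0.01 is assumed

def search_space():
    """Enumerate every (optimizer, batch_size, learning_rate) combination."""
    return list(product(OPTIMIZERS, BATCH_SIZES, LEARNING_RATES))

def potential_forest_fire(detected_labels):
    """Raise the alert only when fire AND smoke appear in the same frame,
    mirroring the 'POTENTIAL FOREST FIRE' indicator in the study."""
    labels = {label.lower() for label in detected_labels}
    return {"fire", "smoke"} <= labels

configs = search_space()
print(len(configs))                              # 24 configurations
print(potential_forest_fire(["fire", "smoke"]))  # True -> alert raised
print(potential_forest_fire(["smoke"]))          # False -> no alert
```

In a live pipeline, `detected_labels` would be the class labels produced by the YOLOv10 detector on each RTMP video frame; the subset check ensures the indicator fires only when both classes co-occur, which is the false-alarm-reduction rule the abstract describes.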
Copyright © 2026