Early and accurate detection of chili plant diseases is essential for precision agriculture and for minimizing crop losses. Conventional visual inspection by farmers is often subjective and inconsistent, particularly under varying lighting conditions and in complex field environments. Recent advances in deep learning, especially object detection models, enable automated disease identification with greater reliability. This study evaluates the YOLOv11 architecture for detecting three chili plant condition classes (anthracnose, fruit fly, and healthy fruit) using a primary dataset of 1,062 field images collected in Karawang, Indonesia. The model was trained under a standardized configuration and compared with three widely used object detection models: YOLOv8, YOLOv5, and SSD. Training ran for 100 epochs, and evaluation covered precision, recall, mAP50, mAP50–95, and inference time. Experimental results show that YOLOv11 achieved the highest detection performance, with an mAP50 of 86.94%, outperforming YOLOv8 by 3.8%, YOLOv5 by 6.8%, and SSD by 12.7%. The model also recorded the fastest inference time at 10.9 ms, making it suitable for real-time field applications. Training analysis indicated stable convergence by the 61st epoch, supported by balanced precision (0.82391) and recall (0.77967) values and by consistent reductions in both training and validation losses. These findings demonstrate that YOLOv11 detects chili plant diseases more accurately and efficiently than previous YOLO variants and SSD, and that it holds strong potential for deployment in practical agricultural environments.
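As a minimal sketch of the training and evaluation workflow summarized above, the snippet below fine-tunes a YOLOv11 model for 100 epochs and reports the same metric families (precision, recall, mAP50, mAP50–95). It assumes the Ultralytics Python API; the dataset config name chili.yaml and the pretrained weights file are illustrative placeholders, not details taken from the study.

```python
# Minimal sketch, assuming the Ultralytics Python API.
# "chili.yaml" is a hypothetical dataset config describing the three classes
# (anthracnose, fruit fly, healthy fruit); it is not provided by the paper.
from ultralytics import YOLO

# Load a pretrained YOLOv11 checkpoint and fine-tune it on the chili dataset
# for 100 epochs, matching the training budget reported in the abstract.
model = YOLO("yolo11n.pt")
model.train(data="chili.yaml", epochs=100, imgsz=640)

# Validate to obtain the metric families reported in the study:
# mean precision, mean recall, mAP50, and mAP50-95.
metrics = model.val()
print(f"precision: {metrics.box.mp:.5f}")
print(f"recall:    {metrics.box.mr:.5f}")
print(f"mAP50:     {metrics.box.map50:.4f}")
print(f"mAP50-95:  {metrics.box.map:.4f}")
```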