Food loss and waste (FLW) is a serious issue in the global food system, including the banana commodity sector. One of the main challenges lies in the manual fruit-sorting process, which is prone to errors and inconsistencies caused by human subjectivity. Although various deep learning approaches have been applied to fruit ripeness classification, most previous studies rely on earlier versions of the YOLO model or on conventional CNNs, which are limited in handling visual variation and detecting small objects in real time. This study proposes the application of the YOLOv11 algorithm, a state-of-the-art deep learning model in computer vision, to automate the visual classification of banana ripeness levels. Leveraging YOLOv11's strengths in real-time object detection, the system categorizes bananas into four ripeness classes. Experimental results show that the model achieved an mAP@0.5 of 0.835, with a highest per-class precision of 0.934 and an average inference time of 63.8 milliseconds per image. The extreme classes (unripe and overripe) yielded high accuracy, while the transitional classes showed performance drops due to visual similarity between adjacent ripeness stages. This approach is expected to support food loss reduction, improve sorting efficiency, and enhance the competitiveness of horticultural products in both domestic and export markets.
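The mAP@0.5 figure reported above follows the standard detection convention: a predicted box counts as a true positive only if its intersection-over-union (IoU) with a ground-truth box of the same class is at least 0.5. As a minimal self-contained sketch (the box coordinates here are illustrative, not from the paper's dataset), the IoU check underlying that metric can be written as:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Coordinates of the overlapping rectangle, if any.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Under mAP@0.5, this hypothetical prediction would count as a true
# positive, since its overlap with the ground-truth box exceeds 0.5.
pred = (10, 10, 50, 50)
gt = (12, 12, 48, 52)
print(iou(pred, gt) >= 0.5)  # → True
```

Precision and recall are then computed per class from these matched detections, and mAP@0.5 averages the resulting average-precision values across the four ripeness classes.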
Copyright © 2025