This study aims to improve the accuracy of banana edibility detection using the YOLOv8 deep learning model. A total of 346 banana images were captured with a smartphone camera and split into training (303), validation (29), and testing (14) subsets. The research framework consisted of four main stages: data collection, preprocessing, model training, and performance evaluation. Preprocessing was performed on the Roboflow platform and included image annotation, resizing, automatic orientation correction, contrast adjustment, and data augmentation through rotation, mosaic, and noise addition to increase data variation and improve model robustness. The YOLOv8 model was trained for 60 epochs, converging in 0.173 hours (roughly 10 minutes), and random search was used to optimize hyperparameters and select the best model configuration. The evaluation yielded a precision of 99.7%, a recall of 100%, and a mean Average Precision (mAP) of 99.5%. The Precision-Confidence, Recall-Confidence, and F1-Confidence curves each peaked at 100%, and the normalized confusion matrix showed no misclassifications. Testing on unseen data further confirmed the model's ability to detect bananas and classify them into Good Quality and Bad Quality classes with high confidence scores. These findings highlight YOLOv8 as a robust and reliable model for automated fruit quality assessment. The approach offers a non-destructive, fast, and consistent method for evaluating banana edibility, reducing reliance on manual inspection and the associated human error. In addition, this study contributes to the advancement of smart agriculture and post-harvest management by demonstrating the potential of deep learning and computer vision to support real-time quality control and decision-making in the agricultural industry.
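The training, hyperparameter search, and evaluation workflow described above can be illustrated with a minimal sketch using the Ultralytics YOLOv8 Python API. The dataset path, model size (yolov8n), image size, search space, and number of trials below are assumptions for illustration; the study's actual random-search space and best configuration are not reproduced here.

```python
import random

from ultralytics import YOLO  # Ultralytics YOLOv8 package

# Illustrative random search over a small hyperparameter grid.
# These values are placeholders, not the study's actual search space.
SEARCH_SPACE = {
    "lr0": [0.001, 0.005, 0.01],   # initial learning rate
    "batch": [8, 16, 32],          # batch size
}
DATA_YAML = "banana-dataset/data.yaml"  # Roboflow export (hypothetical path)

best_map50, best_cfg = 0.0, None
for trial in range(5):  # number of trials is an assumption
    cfg = {name: random.choice(values) for name, values in SEARCH_SPACE.items()}

    # Start each trial from a pretrained checkpoint and train for 60 epochs,
    # matching the epoch count reported in the study.
    model = YOLO("yolov8n.pt")
    model.train(data=DATA_YAML, epochs=60, imgsz=640, **cfg)

    # Validation reports precision, recall, and mAP for the trial.
    metrics = model.val()
    if metrics.box.map50 > best_map50:
        best_map50, best_cfg = metrics.box.map50, cfg

print(f"Best configuration: {best_cfg} (mAP@0.5 = {best_map50:.3f})")

# Inference on unseen test images; each detection carries a class label
# (Good Quality / Bad Quality) and a confidence score.
best_weights = "runs/detect/train/weights/best.pt"  # adjust to the best trial's run directory
final_model = YOLO(best_weights)
results = final_model.predict(source="banana-dataset/test/images", conf=0.25)
```

In this sketch, `model.val()` corresponds to the reported precision, recall, and mAP evaluation, while `predict()` corresponds to testing on unseen data; in practice the weights from the best random-search trial would be reloaded before the final test-set inference.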