Image-based durian leaf disease detection is challenging due to the high visual similarity among symptoms and the limited, imbalanced dataset available. This study compares three deep learning architectures (VGG16, InceptionV3, and a U-Net encoder-based classifier) using transfer learning to classify five durian leaf conditions. The dataset of 4,437 images underwent preprocessing, augmentation, and preliminary U-Net segmentation to focus the models on leaf regions. The upper layers of each model were fine-tuned to adapt their feature representations to tropical leaf characteristics. The results indicate that InceptionV3 achieved the most stable and accurate performance, with an accuracy of approximately 0.66, while VGG16 produced balanced results but was more prone to overfitting. U-Net proved effective for segmentation but less suitable as a classifier because small-scale lesion details were lost. Overall, the findings demonstrate that combining U-Net segmentation with CNN-based transfer learning improves disease identification performance, particularly under limited data conditions.
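The fine-tuning strategy described above (freezing a pretrained backbone, then unfreezing only its upper layers and attaching a five-class head) can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: TensorFlow/Keras, the 30-layer unfreeze depth, the dropout rate, and the learning rate are all assumptions; in practice `weights="imagenet"` would be used instead of `weights=None`.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier(num_classes=5, input_shape=(299, 299, 3), unfreeze=30):
    # InceptionV3 as a feature extractor; weights=None keeps this sketch
    # offline-runnable (use weights="imagenet" for real transfer learning).
    base = tf.keras.applications.InceptionV3(
        include_top=False, weights=None, input_shape=input_shape)
    base.trainable = False
    # Unfreeze only the upper layers so high-level features adapt to
    # durian leaf characteristics while early filters stay generic.
    for layer in base.layers[-unfreeze:]:
        layer.trainable = True
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.3),  # assumed regularization against overfitting
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

model = build_classifier()
```

Keeping the lower layers frozen is what makes this viable on a small, imbalanced dataset: only a fraction of the network's parameters are updated, reducing the risk of the overfitting observed with VGG16.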
Copyright © 2026