Diabetic Retinopathy (DR) is a complication of diabetes and the leading cause of blindness in the working-age population, affecting more than 93 million people worldwide. The quality of retinal fundus images is strongly affected by lighting conditions and contrast, making image preprocessing a critical factor in the accuracy of deep learning-based detection models. This study compares the effect of four preprocessing techniques, namely original images, color Histogram Equalization (HE), grayscale HE, and Color Constancy, on the performance of a Convolutional Neural Network (CNN) based on the AlexNet architecture for DR detection. The APTOS 2019 Kaggle dataset was used, comprising 3,722 color retinal fundus images: 1,830 non-DR and 1,892 DR. Models were validated with 10-fold cross-validation and evaluated using the confusion matrix, ROC curve, accuracy, sensitivity, and specificity. Results show that original images yielded the best overall performance, with an accuracy of 96.10%, sensitivity of 97.98%, specificity of 94.29%, and AUC of 0.994. Grayscale HE produced the highest AUC (0.996), while Color Constancy had the lowest (0.989). These findings indicate that the color information in fundus images carries important discriminative features and that preprocessing does not always improve overall accuracy. The AlexNet model shows potential for implementation as a DR screening system based on Computer-Aided Diagnosis (CAD) with relatively low computational complexity.
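The three preprocessing variants compared against the original images can be sketched in plain NumPy. This is a minimal illustration, not the paper's implementation (which is not published here): the function names are hypothetical, and Color Constancy is assumed to follow the common gray-world formulation.

```python
import numpy as np

def hist_equalize(channel):
    """Classic histogram equalization on a single uint8 channel."""
    hist = np.bincount(channel.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    if cdf[-1] == cdf_min:          # flat image: nothing to equalize
        return channel
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[channel]

def color_he(img):
    """Color HE: equalize each RGB channel independently."""
    return np.stack([hist_equalize(img[..., c]) for c in range(3)], axis=-1)

def grayscale_he(img):
    """Grayscale HE: luminance conversion followed by equalization."""
    gray = (0.299 * img[..., 0] + 0.587 * img[..., 1]
            + 0.114 * img[..., 2]).astype(np.uint8)
    return hist_equalize(gray)

def gray_world(img):
    """Gray-world Color Constancy: rescale channels to a common mean."""
    means = img.reshape(-1, 3).mean(axis=0)
    scale = means.mean() / means
    return np.clip(img * scale, 0, 255).astype(np.uint8)
```

Grayscale HE discards the color channels entirely, which is consistent with the reported result that color carries discriminative information: it boosts contrast (and here, AUC) but loses hue cues the CNN can otherwise exploit.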
Copyright © 2026