Diabetes, if not detected early, can lead to serious complications such as diabetic retinopathy, a leading cause of vision loss. Explainable Artificial Intelligence (XAI) can enhance traditional Machine Learning methods, which often lack the transparency and interpretability required for diagnostic tasks. This Systematic Literature Review explores the data inputs that influence the performance of XAI models in detecting diabetic retinopathy, how XAI techniques can improve early detection outcomes, the challenges in implementing these techniques, and the ethical implications of using these models in clinical practice. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) approach guided the search across four databases: Springer, ScienceDirect, PubMed and IEEE Xplore. The findings reveal that XAI techniques such as Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP) and Gradient-weighted Class Activation Mapping (Grad-CAM) offer opportunities including improved early detection, integration with existing clinical workflows, enhanced trust in AI systems, improved accuracy and personalised treatment. XAI can also facilitate collaboration among clinicians, help maintain fairness in AI systems and support adherence to ethical standards. However, research on the clinical validation of these models, as well as standardised performance evaluation metrics, is lacking.
Copyright © 2025