Jurnal Teknik Informatika C.I.T. Medicom
Vol 17 No 6 (2026): Computer Science

Transparency Analysis of Deep Learning Models in Medical Data Using SHAP and LIME

Arka Evander (Department of Computer Science, University of Luxembourg, Luxembourg)
Lyra Amara Quinn (Department of Computer Science, University of Luxembourg, Luxembourg)



Article Info

Publish Date
30 Jan 2026

Abstract

The increasing adoption of deep learning models in healthcare has significantly improved the accuracy of medical diagnosis and prediction; however, their lack of transparency remains a critical challenge. These models often operate as “black boxes,” making it difficult for healthcare professionals to understand the reasoning behind their predictions, which raises concerns regarding trust, safety, and ethical decision-making. This study analyzes the transparency of deep learning models applied to medical data using two widely adopted explainable artificial intelligence (XAI) techniques: SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic Explanations). A deep learning model was developed using medical datasets, including clinical (tabular) and/or medical imaging data, and evaluated with performance metrics such as accuracy, precision, recall, F1-score, and Area Under the Curve (AUC). To enhance interpretability, SHAP and LIME were applied to explain the model’s predictions at both global and local levels. The results indicate that the model achieves high predictive performance, with key features such as glucose level, age, blood pressure, and cholesterol significantly influencing predictions. The comparative analysis shows that SHAP provides more consistent, stable, and comprehensive explanations, making it more suitable for global interpretation and clinical decision support. In contrast, LIME offers simpler and more intuitive local explanations, which are useful for understanding individual predictions but may lack stability across samples. This study contributes to the advancement of explainable AI in healthcare by demonstrating how interpretability techniques can bridge the gap between high model performance and practical clinical applicability. Future research is recommended to explore more robust and scalable XAI approaches for real-world medical applications.
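The two attribution schemes the abstract compares can be illustrated with a minimal, self-contained sketch (NumPy only; a real study would use the `shap` and `lime` packages against a trained deep model). The linear "risk model", its coefficients, and the feature names glucose, age, and blood pressure below are illustrative assumptions, not the paper's actual model or data:

```python
from itertools import combinations
from math import factorial

import numpy as np


def shapley_values(predict, x, baseline):
    """Exact Shapley attributions for one instance (the game-theoretic
    definition SHAP approximates). 'Absent' features are filled in from a
    single background sample; cost is exponential in the number of
    features, so this is only viable for tiny d."""
    d = len(x)
    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for size in range(d):
            for S in combinations(others, size):
                # Shapley weight |S|! (d - |S| - 1)! / d!
                w = factorial(size) * factorial(d - size - 1) / factorial(d)
                z_without = baseline.copy()
                for j in S:
                    z_without[j] = x[j]
                z_with = z_without.copy()
                z_with[i] = x[i]
                phi[i] += w * (predict(z_with) - predict(z_without))
    return phi


def lime_weights(predict, x, scale=1.0, n_samples=500, kernel_width=1.0):
    """LIME-style local surrogate: perturb around x, weight samples by
    proximity to x, and fit a weighted linear model to the black box."""
    rng = np.random.default_rng(0)
    d = len(x)
    Z = x + rng.normal(scale=scale, size=(n_samples, d))
    y = np.array([predict(z) for z in Z])
    prox = np.exp(-np.sum((Z - x) ** 2, axis=1) / kernel_width ** 2)
    sw = np.sqrt(prox)
    A = np.hstack([np.ones((n_samples, 1)), Z])  # intercept + features
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[1:]  # drop the intercept


# Toy linear "risk model" over three hypothetical clinical features
# (glucose, age, blood pressure). For a linear model, both methods
# should recover attributions consistent with the model's coefficients.
weights = np.array([0.03, 0.01, 0.02])
predict = lambda z: float(weights @ z)

x = np.array([150.0, 60.0, 130.0])         # one patient
baseline = np.array([100.0, 50.0, 120.0])  # background sample

phi = shapley_values(predict, x, baseline)  # local SHAP-style attributions
lime_coef = lime_weights(predict, x)        # local LIME surrogate slopes
# Efficiency property: phi sums exactly to predict(x) - predict(baseline)
```

The sketch also hints at the stability contrast the abstract reports: the Shapley attributions are deterministic and satisfy the efficiency property by construction, while the LIME coefficients depend on the random perturbation sample and kernel width, so they can vary across runs and instances.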

Copyrights © 2026






Journal Info

Abbrev

JTI

Publisher

Subject

Computer Science & IT

Description

The Jurnal Teknik Informatika C.I.T. is a scientific journal on decision support systems, expert systems, and artificial intelligence, publishing scholarly writings on pure and applied research in the field of information systems and information technology, as well as general reviews of ...