Lyra Amara Quinn
Department of Computer Science, University of Luxembourg, Luxembourg

Published: 1 Document

Articles

Transparency Analysis of Deep Learning Models in Medical Data Using SHAP and LIME
Arka Evander; Lyra Amara Quinn
Jurnal Teknik Informatika C.I.T Medicom Vol 17 No 6 (2026): Computer Science
Publisher : Institute of Computer Science (IOCS)


Abstract

The increasing adoption of deep learning models in healthcare has significantly improved the accuracy of medical diagnosis and prediction; however, their lack of transparency remains a critical challenge. These models often operate as "black boxes," making it difficult for healthcare professionals to understand the reasoning behind their predictions, which raises concerns regarding trust, safety, and ethical decision-making. This study analyzes the transparency of deep learning models applied to medical data using two widely adopted explainable artificial intelligence (XAI) techniques: SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations). A deep learning model was developed using medical datasets, including clinical (tabular) and/or medical imaging data, and evaluated with performance metrics such as accuracy, precision, recall, F1-score, and Area Under the Curve (AUC). To enhance interpretability, SHAP and LIME were applied to explain the model's predictions at both the global and local levels. The results indicate that the model achieves high predictive performance, with key features such as glucose level, age, blood pressure, and cholesterol significantly influencing predictions. The comparative analysis shows that SHAP provides more consistent, stable, and comprehensive explanations, making it more suitable for global interpretation and clinical decision support. In contrast, LIME offers simpler and more intuitive local explanations, which are useful for understanding individual predictions but may lack stability across samples. This study contributes to the advancement of explainable AI in healthcare by demonstrating how interpretability techniques can bridge the gap between high model performance and practical clinical applicability. Future research is recommended to explore more robust and scalable XAI approaches for real-world medical applications.
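To make the SHAP side of the abstract concrete, the sketch below computes exact Shapley values for a toy model in pure Python, with no dependency on the `shap` package. The model, its weights, and the patient/baseline values are illustrative assumptions for exposition (they are not taken from the paper); in practice one would call a SHAP explainer on the trained deep learning model. Feature names mirror those the abstract reports as influential (glucose, age, blood pressure, cholesterol).

```python
# Exact Shapley values for a tiny model, illustrating the attribution
# idea behind SHAP: each feature's value is its average marginal
# contribution over all feature subsets, with "absent" features
# replaced by a baseline value.
from itertools import combinations
from math import factorial

# Hypothetical linear risk score (illustrative weights, not from the paper).
WEIGHTS = {"glucose": 0.05, "age": 0.02, "blood_pressure": 0.03, "cholesterol": 0.01}

def model(x):
    """Risk score: weighted sum of feature values."""
    return sum(WEIGHTS[f] * v for f, v in x.items())

def shapley_values(x, baseline):
    """Exact Shapley values by enumerating all subsets of the other
    features; absent features take their baseline value."""
    features = list(x)
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                # Shapley kernel weight: |S|! * (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_f = {g: (x[g] if g in subset or g == f else baseline[g])
                          for g in features}
                without_f = {g: (x[g] if g in subset else baseline[g])
                             for g in features}
                total += w * (model(with_f) - model(without_f))
        phi[f] = total
    return phi

# Illustrative patient vs. a reference baseline (made-up numbers).
patient = {"glucose": 180.0, "age": 63.0, "blood_pressure": 140.0, "cholesterol": 240.0}
baseline = {"glucose": 100.0, "age": 45.0, "blood_pressure": 120.0, "cholesterol": 200.0}
phi = shapley_values(patient, baseline)

# Efficiency property: attributions sum exactly to f(x) - f(baseline).
assert abs(sum(phi.values()) - (model(patient) - model(baseline))) < 1e-9
```

For a linear model the exact Shapley value of each feature reduces to weight times deviation from baseline, which is why glucose (large weight, large deviation) dominates here; the same subset-averaging definition is what SHAP approximates efficiently for deep networks.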