COMPARATIVE ANALYSIS OF EXPLAINABLE AI USING LIME AND SHAP FOR DIABETES PREDICTION BASED ON LIFESTYLE FACTORS Ricky Salim; Agung Mulyo Widodo
Journal of Golden Generation Multidisciplinary Vol. 2 No. 3 (2026): In Progress 2026: Journal of Golden Generation Multidisciplinary
Publisher : PT. Lembaga Penerbit Penelitian Nusantara

DOI: 10.65244/jggm.v2i3.754

Abstract

The rapid advancement of artificial intelligence (AI) has significantly impacted the healthcare sector, particularly in supporting the early detection of diabetes. However, many AI models remain difficult to adopt clinically because of their black-box nature, in which the decision-making process is not easily understood. This study compares two Explainable Artificial Intelligence (XAI) methods, Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), in interpreting the predictions of an Artificial Neural Network (ANN) model trained on the Diabetes Health Indicator dataset. Prior to modeling, the data were preprocessed through cleaning and normalization to ensure quality and consistency. The trained ANN model was then analyzed using LIME and SHAP to evaluate the contribution of each feature to the prediction outcomes. The results show that both methods provide meaningful and interpretable explanations, although SHAP yields more consistent and stable interpretations across the dataset. These findings highlight the importance of integrating XAI techniques to enhance model transparency, thereby increasing trust and supporting more reliable decision-making in clinical settings, particularly for diabetes diagnosis.
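To illustrate the core idea behind SHAP referenced in the abstract, here is a minimal pure-Python sketch that computes exact Shapley values by enumerating feature coalitions, with "absent" features replaced by a baseline value. The toy linear risk model and its weights are hypothetical illustrations, not taken from the study (which used an ANN and the SHAP library); in practice one would use `shap.Explainer` on the trained model rather than this brute-force enumeration, which is exponential in the number of features.

```python
import itertools
import math

def exact_shapley(f, x, baseline):
    """Exact Shapley values for model f at instance x.
    Features outside a coalition are replaced by their baseline value."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in itertools.combinations(others, size):
                # Standard Shapley coalition weight: |S|! (n-|S|-1)! / n!
                weight = (math.factorial(size) * math.factorial(n - size - 1)
                          / math.factorial(n))
                with_i = [x[j] if (j in subset or j == i) else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in subset else baseline[j]
                             for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Hypothetical linear "diabetes risk score" over three lifestyle features
# (e.g. BMI, physical activity, smoking) -- weights are illustrative only.
weights = [0.8, -0.5, 0.3]

def model(z):
    return sum(w * v for w, v in zip(weights, z))

x = [1.2, 0.4, 1.0]          # normalized instance to explain
baseline = [0.0, 0.0, 0.0]   # reference point ("average" patient)
phi = exact_shapley(model, x, baseline)
# For a linear model, phi[i] equals weights[i] * (x[i] - baseline[i]),
# and the attributions sum to f(x) - f(baseline) (local accuracy).
```

The "local accuracy" property shown in the final comment is what makes SHAP attributions additive: the per-feature contributions always reconstruct the gap between the model's prediction and the baseline prediction, which is one reason SHAP explanations tend to be more consistent than LIME's locally fitted surrogates.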