This study introduces a transparent deep learning framework for credit default analysis that integrates Artificial Neural Networks (ANN) with dual interpretability mechanisms: SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME). Using the Default of Credit Card Clients dataset from the UCI Machine Learning Repository, the research develops an optimized model that combines predictive precision with explanatory transparency. The ANN model achieved an accuracy of 81.8% and an AUC of 0.77, outperforming conventional classifiers such as XGBoost and LightGBM while maintaining interpretive clarity. The hybrid SHAP–LIME configuration provides both global and local explanations, identifying repayment status (PAY_0), billing amount (BILL_AMT1), and credit limit (LIMIT_BAL) as the most influential predictors. Empirical findings confirm that interpretability enhances trust, auditability, and regulatory alignment without sacrificing statistical performance. The framework offers a methodological contribution to transparent financial modeling, bridging the gap between algorithmic precision and human interpretive accountability. It advances the paradigm of responsible credit risk management by transforming black-box neural architectures into auditable, evidence-based decision tools for financial institutions.
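As an illustrative companion to the workflow summarized above, the following minimal Python sketch shows how an ANN classifier can be paired with global SHAP attributions and a local LIME explanation on the UCI data. It is a sketch under stated assumptions, not the paper's implementation: the file name `credit_default.csv`, the target column label, the network architecture, and all sample sizes are hypothetical placeholders rather than the authors' tuned configuration.

```python
# Minimal sketch of an ANN + SHAP + LIME pipeline; not the paper's exact code.
# Assumptions (hypothetical): the UCI data has been exported locally to
# "credit_default.csv" with its original column names, and the target column
# is labeled "default payment next month".
import numpy as np
import pandas as pd
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("credit_default.csv")                    # hypothetical local path
X = df.drop(columns=["default payment next month"])
y = df["default payment next month"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

scaler = StandardScaler().fit(X_train)
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

# Small feed-forward ANN; the paper's tuned architecture may differ.
ann = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=42)
ann.fit(X_train_s, y_train)

# Global attributions: model-agnostic KernelSHAP over the default-probability output.
predict_default = lambda d: ann.predict_proba(d)[:, 1]
background = shap.sample(X_train_s, 100)                  # small background sample
shap_values = shap.KernelExplainer(predict_default, background).shap_values(X_test_s[:50])
ranking = sorted(zip(X.columns, np.abs(shap_values).mean(axis=0)), key=lambda t: -t[1])
print("Top global features:", ranking[:3])   # paper reports PAY_0, BILL_AMT1, LIMIT_BAL

# Local explanation of a single applicant's prediction via LIME.
lime_explainer = LimeTabularExplainer(
    X_train_s, feature_names=list(X.columns),
    class_names=["no default", "default"], mode="classification")
print(lime_explainer.explain_instance(
    X_test_s[0], ann.predict_proba, num_features=5).as_list())
```

KernelSHAP is used here because, like LIME, it is model-agnostic and so applies to any classifier exposing a probability output; the SHAP ranking serves the global view and the LIME output the per-decision view described in the abstract.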