Explainable artificial intelligence (XAI) uses artificial intelligence (AI) tools and techniques to build interpretability into black-box algorithms. XAI methods are classified by purpose (pre-model, in-model, and post-model), scope (local or global), and usability (model-agnostic or model-specific). This paper summarizes XAI methods and techniques with real-life examples of XAI applications. Local interpretable model-agnostic explanations (LIME) and Shapley additive explanations (SHAP) were applied to the moral dataset to compare the performance of the two methods. The study found that XAI algorithms can be custom-built for enhanced model-specific explanations. Relying on a single XAI method has several limitations; a combination of techniques gives a complete insight for all stakeholders.
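To illustrate the local, model-agnostic idea behind LIME, the following is a minimal NumPy-only sketch (not the `lime` library's API): it perturbs an instance, queries the black-box model on the perturbations, and fits a proximity-weighted linear surrogate whose coefficients serve as local feature importances. The function name, kernel width, and toy black-box model are illustrative assumptions, not from the paper.

```python
import numpy as np

def lime_local_explanation(predict_fn, x, n_samples=1000, kernel_width=0.75, seed=0):
    """Minimal LIME-style sketch: weighted linear surrogate around instance x."""
    rng = np.random.default_rng(seed)
    # Perturb the instance with Gaussian noise
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    y = predict_fn(Z)
    # Proximity weights: perturbations closer to x count more
    d = np.linalg.norm(Z - x, axis=1)
    sw = np.sqrt(np.exp(-(d ** 2) / kernel_width ** 2))
    # Weighted least squares (features plus an intercept column)
    A = np.hstack([Z, np.ones((n_samples, 1))])
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]  # per-feature local importance (intercept dropped)

# Toy black box (assumed for illustration): feature 0 dominates the output
black_box = lambda Z: 3.0 * Z[:, 0] + 0.1 * Z[:, 1]
x0 = np.array([1.0, 1.0])
weights = lime_local_explanation(black_box, x0)
print(weights)  # feature 0 should receive a much larger weight than feature 1
```

Because the surrogate is fit only on samples near `x0`, the coefficients explain the model's behavior locally; the production LIME and SHAP packages refine this idea with interpretable feature representations and game-theoretic weighting, respectively.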
Copyright © 2024