The integration of Artificial Intelligence (AI) into Business Intelligence (BI) systems has significantly advanced data analysis and decision-making capabilities. However, the inherent "black box" nature of many sophisticated AI models poses considerable challenges to transparency, interpretability, and user trust, hindering the full adoption of such models in critical business contexts. Explainable AI (XAI) has emerged as a crucial field for addressing these challenges by rendering AI decision-making processes understandable and verifiable. This paper investigates the impact of different XAI methodologies on transparency, interpretability, and user trust in BI systems through a mixed-methods study. We specifically evaluate the effectiveness of feature importance techniques (LIME, SHAP) and rule extraction methods (Decision Tree Surrogates) in enhancing user understanding and confidence when interacting with an AI-driven BI prototype for customer churn prediction. Our findings reveal that while the baseline black-box model achieved high predictive accuracy, the XAI-enhanced scenarios significantly improved user trust and perceived interpretability. Notably, the Decision Tree Surrogate model achieved the best balance among explainability, user trust, and decision accuracy. This research provides empirical insights into tailoring XAI explanations to varying user needs in BI and offers guidelines for integrating XAI into more ethical, transparent, and trustworthy BI solutions, ultimately fostering greater user acceptance and more informed decision-making.
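To make the compared explanation techniques concrete, the sketch below is a minimal, hypothetical Python illustration rather than the study's actual pipeline: it trains a stand-in black-box churn classifier on synthetic data, derives SHAP feature-importance explanations, and fits a shallow decision-tree surrogate whose fidelity to the black box is reported. The dataset, feature names, and model choices are invented assumptions; it presumes scikit-learn and the `shap` package are available, and LIME explanations would follow an analogous local-explanation workflow.

```python
# Hypothetical sketch of the XAI techniques compared in the paper: SHAP feature
# importance and a decision-tree surrogate for a black-box churn classifier.
# All data, feature names, and hyperparameters here are illustrative assumptions.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for a customer churn dataset (not the study's data).
X, y = make_classification(n_samples=2000, n_features=6, n_informative=4, random_state=0)
feature_names = ["tenure", "monthly_charges", "support_calls",
                 "contract_length", "num_products", "last_login_days"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Black-box" churn model.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Feature-importance explanations via SHAP (TreeExplainer handles tree ensembles).
explainer = shap.TreeExplainer(black_box)
shap_values = explainer.shap_values(X_test)
# In practice these values would be summarized or visualized,
# e.g. with shap.summary_plot(shap_values, X_test).

# Global decision-tree surrogate: fit a shallow, human-readable tree to the
# black box's predictions and measure its fidelity to the black box.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))

print(f"Surrogate fidelity to the black box: {fidelity:.2f}")
print(export_text(surrogate, feature_names=feature_names))
```

In a setup like this, the surrogate's fidelity score is the quantity to watch when weighing explainability against decision accuracy, since a readable tree is only trustworthy to the extent that it faithfully mimics the underlying model.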