Large Language Models (LLMs) have transformed modern chatbots into systems capable of natural interaction and of understanding conversational context across domains such as education, healthcare, and customer service. This advancement, however, comes with a major challenge: the "black box" nature of these models conceals their decision-making logic, which undermines user trust, complicates debugging, and raises ethical concerns in high-risk domains such as medical diagnosis and financial consulting. Explainable AI (xAI) has emerged as an approach to making AI decision-making more transparent and comprehensible to both developers and end-users. This study conducts a systematic review of 103 recent studies (2020-2025) to map the xAI techniques applied to modern chatbots. The analysis reveals that technical methods such as attention mechanisms and feature-importance analysis dominate xAI implementation, with an emerging trend toward natural language explanations aimed at end-users. The main contributions of this review are identifying the trade-off between model performance and interpretability, highlighting the need for standardized evaluation metrics, and exposing the limited ecological validity of research conducted primarily in controlled laboratory settings. The review emphasizes that xAI is a fundamental requirement, not merely an optional feature, for building responsible and trustworthy conversational AI systems. It also proposes future research directions: the development of domain-specific xAI frameworks, cross-cultural studies, and the formulation of robust ethical guidelines that ensure the benefits of AI can be realized without compromising accountability and user autonomy.