The black-box effect in artificial intelligence models has raised serious concerns about transparency and accountability. Explainable Artificial Intelligence (XAI) has emerged as a response, offering interpretability and clearer reasoning for complex models. This study analyzes the development of XAI research and identifies the dominant approaches used to overcome the limitations of black-box models. A bibliometric method was applied to data from the Scopus database, focusing on publication trends, author keywords, and frequently applied methods. The analysis shows that XAI research has grown rapidly, with the number of publications increasing more than tenfold within the observed period, a trend indicating that interpretability is becoming a central concern of artificial intelligence research. Keyword analysis highlights the strong association of XAI with machine learning and deep learning, while method analysis reveals that SHAP and LIME are the most dominant techniques, supported by Grad-CAM and surrogate models in more specific applications. These findings confirm that XAI is not merely an academic discourse but an urgent response to the challenges of modern AI systems, ensuring that models are not only accurate but also transparent, understandable, and trustworthy for decision-making.
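To make concrete what the abstract means by SHAP as a dominant technique, the following is a minimal illustrative sketch (not drawn from the study itself): a hypothetical tree-ensemble classifier is explained with the shap library, which attributes each prediction additively to the input features. The model and dataset here are assumptions chosen only for illustration.

```python
# Minimal sketch of a SHAP explanation for a "black-box" model.
# The RandomForestClassifier and breast-cancer dataset are illustrative
# assumptions, not the models analyzed in the study.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train an ordinary black-box classifier on a standard dataset.
X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values for tree ensembles: one additive
# contribution per feature, per prediction, making each output decomposable.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])  # explain the first 5 samples

# Each sample's feature contributions, together with the base value,
# sum to the model's output for that sample.
print(np.array(shap_values).shape)
```

LIME pursues the same goal differently, fitting a simple local surrogate model around one prediction rather than computing additive attributions, which is why the two are often reported together as the field's reference methods.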