The proliferation of "black box" artificial intelligence (AI) systems creates a significant ethical void regarding accountability and user autonomy, fundamentally challenging individuals' right to understand decisions that affect their lives. This study analyzes the moral obligations of AI developers to implement explainability (XAI) through the normative framework of Kantian deontological ethics. Employing a qualitative design grounded in conceptual analysis, the study draws on secondary data from Kant's foundational texts and contemporary literature on algorithmic transparency, applying the Categorical Imperative as the primary analytical lens. The analysis concludes that deploying non-explainable AI directly violates Kant's Formula of Humanity, as it treats users merely as means to computational ends rather than as autonomous, rational agents. The practice likewise fails the Formula of Universal Law, since a maxim of opacity in decision-making cannot be universalized without contradiction. Consequently, the study asserts that explainability is a non-negotiable moral duty for developers and that predictive accuracy cannot ethically justify the erosion of human autonomy, demanding a paradigm shift from utilitarian efficiency to deontological adherence in AI development.