Fauziyah, Rizma
Unknown Affiliation

Published: 1 Document
Articles

Found 1 Document

Analysis of the Moral Obligations of AI Developers Through the Principle of Explainability in the Perspective of Kantian Deontological Ethics: A Qualitative Study Fauziyah, Rizma; Winarno, Agung; Subagyo, Subagyo
The Eastasouth Journal of Information System and Computer Science Vol. 3 No. 02 (2025): The Eastasouth Journal of Information System and Computer Science (ESISCS)
Publisher : Eastasouth Institute

DOI: 10.58812/esiscs.v3i02.816

Abstract

The proliferation of "black box" Artificial Intelligence systems creates a significant ethical void regarding accountability and user autonomy, fundamentally challenging the right of individuals to understand decisions affecting their lives. This study analyzes the moral obligation of AI developers to implement Explainability (XAI) through the normative framework of Kantian Deontological Ethics. Employing a qualitative research design based on conceptual analysis, the study draws on secondary data from Kant's foundational texts and contemporary literature on algorithmic transparency, applying the Categorical Imperative as its primary lens. The findings conclude that deploying non-explainable AI directly violates Kant's Formula of Humanity, because it reduces users to mere means for achieving computational goals rather than treating them as autonomous, rational agents. The practice also fails the Universal Law test, which prohibits universalizing opacity in decision-making processes. Consequently, the study asserts that Explainability is a non-negotiable moral duty for developers: predictive accuracy cannot ethically justify the erosion of human autonomy. This demands a paradigm shift from utilitarian efficiency to deontological adherence in AI development.