This article examines the regulation of AI development and use at the national and international levels through a metaethical approach. It defends the thesis that AI cannot be positioned as a moral agent because it lacks consciousness (an epistemological problem), and that the subjective reality formed by human thought and consciousness differs from the reality produced by AI's computational processes (an ontological problem). The research is a literature study employing qualitative analysis of key documents on AI ethics at the national and international levels. The article concludes that all AI actions are subordinate to human actions and are therefore never value-free. Because AI is not a moral agent, the regulation of AI development, application, and use requires an ethical framework that can accommodate moral progress at a pace matching the rapid innovation of AI. In other words, AI regulation must operate within a dialectical framework that leaves room for evaluation and re-contextualization, so that it remains relevant to the dynamics of social and cultural realities.
Copyright © 2024