The rapid development of artificial intelligence–based military technology poses significant conceptual and normative challenges to the application of International Humanitarian Law (IHL), particularly regarding the deployment of Lethal Autonomous Weapon Systems (LAWS). The capacity of such systems to autonomously select and engage targets raises complex questions of legal accountability when violations of the law of armed conflict occur or civilians are harmed. This article reassesses the doctrine of command responsibility in the context of LAWS by positioning the core principles of IHL (distinction, proportionality, precaution, and accountability) as evaluative benchmarks. Employing a normative juridical approach, the study analyzes the 1949 Geneva Conventions, Additional Protocol I of 1977, and the 1998 Rome Statute of the International Criminal Court, alongside relevant Indonesian national legislation, particularly Law No. 3 of 2002 on National Defense and Law No. 34 of 2004 on the Indonesian National Armed Forces (TNI). The findings demonstrate that although increasing technological autonomy may reduce direct human involvement in lethal decision-making, the legal obligations of military commanders cannot be disregarded. At the same time, algorithmic complexity, opacity in decision-making processes (the “black box” problem), and the involvement of multiple actors necessitate a reconceptualization of existing accountability frameworks, including the recognition of shared or joint responsibility. The article argues that, in the absence of specific international regulation governing LAWS, a significant accountability gap risks undermining the effective enforcement of IHL in future armed conflicts. Accordingly, it calls for the strengthening of international legal frameworks to regulate LAWS explicitly, ensuring that humanitarian principles remain aligned with the realities of contemporary military operations.