The rapid and opaque development of Lethal Autonomous Weapons Systems (LAWS) has created a profound crisis in the foundational frameworks of International Humanitarian Law (IHL). This paper argues that the current diplomatic discourse within the United Nations Convention on Certain Conventional Weapons, centered on the nebulous concept of “Meaningful Human Control,” is insufficient and strategically stalled. It posits that the advent of sophisticated artificial intelligence-driven targeting, as seen in contemporary conflicts, necessitates a fundamental shift in the legal paradigm. The analysis contends that IHL's core principles of distinction, proportionality, and precaution in attack cannot be genuinely satisfied by opaque algorithms whose decision-making processes are inscrutable and whose parameters may be shaped by biased datasets. The paper examines how the deployment of LAWS fractures the chain of legal accountability, creating a “responsibility gap” in which no human can be held legally liable for an unlawful algorithmic kill decision. Moving beyond critique, the paper proposes a new regulatory framework based on “Algorithmic Accountability.” This framework demands legally binding prohibitions on autonomy in critical functions, mandatory human rights impact assessments, transparent algorithmic auditing, and the establishment of an international registry for military artificial intelligence systems. The research aims to break the diplomatic impasse by providing a concrete, legally rigorous pathway for governing the weaponization of artificial intelligence before its integration erodes the very essence of humanitarian law.