The rapid deployment of Autonomous Weapon Systems (AWS) is reshaping contemporary defence strategies and testing the adequacy of existing International Humanitarian Law (IHL). This research offers a normative legal analysis of the relationship between autonomous lethal decision-making and core IHL principles, particularly distinction, proportionality, and precautions in attack. Combining doctrinal methods with a systematic bibliometric review of 621 Scopus-indexed publications, the study empirically maps the global debate and identifies structural gaps between advances in military AI and current accountability regimes. The findings reveal a persistent accountability gap when AWS operate unpredictably, undermining both individual criminal responsibility and victims' right to a remedy. To address this, the article proposes a reconstruction of IHL centred on robust standards of Meaningful Human Control, algorithmic transparency, and the integration of co-active design as a legal design requirement for AWS. This reconstruction aims to ensure that technological modernisation in warfare does not erode human dignity but remains anchored in human agency and enforceable legal responsibility.

Keywords: Autonomous Weapon Systems; International Humanitarian Law; Meaningful Human Control; accountability gap; co-active design; war torts.
Copyright © 2026