The emergence of a new type of weapon, the Autonomous Weapon System (AWS), which integrates artificial intelligence into military operations, has brought major changes to the way armed conflict is conducted. An AWS can act in place of a human operator: once activated, it can select, engage, injure, and even kill targets without further human intervention. This technology therefore has the potential to create a phenomenon of “dehumanization” in warfare and raises serious concerns, particularly regarding the system's ability to distinguish between military targets and civilians. The use of Israel's “Habsora” system in Gaza illustrates serious violations of international humanitarian law, as evidenced by civilian casualties. The scale of these casualties has sparked public debate over whether such use is compatible with international humanitarian law. This research was prepared using a normative juridical method based on a statutory approach to provide a descriptive analysis and examine the regulatory framework surrounding AI in armed conflict. The results show that the lack of comprehensive regulation complicates legal liability, particularly when an AI system malfunctions due to poor quality or misuse, potentially implicating both creators and users. There is therefore a need for regulation of AWS, whether a complete ban or rules limiting their use. This research, which aims to discuss the urgency of the legality of AWS use and how legal liability should be conceived, is expected to contribute to the development of a comprehensive legal framework and to ensure the responsible and ethical use of AI in military operations.