The integration of Artificial Intelligence (AI) into military drone operations represents a paradigm shift in modern warfare, raising profound questions about ethical governance, human accountability, and technological determinism. This research employs a qualitative methodology grounded in socio-technical systems theory to analyze the ethical challenges arising from the deployment of AI-enabled targeting systems, using the Israel–Palestine conflict as a case study. The guiding research question asks how AI-enabled targeting systems can be ethically governed in modern warfare when their operation is analyzed as complex socio-technical systems within that conflict. Through a comprehensive document analysis of academic literature, reports, and policy documents, this research examines the complex interplay between advanced technological capabilities, human decision-making processes, and organizational structures in military operations. The findings reveal that AI-enabled drones operate as complex socio-technical systems in which technology, human actors, and institutional frameworks are inseparably interconnected. These systemic complexities erode meaningful human control, create a dangerous accountability vacuum, and expose critical gaps in existing legal and regulatory frameworks. This study concludes that effective ethical governance of AI in warfare cannot be achieved through technological solutions or minor procedural adjustments alone; it requires a socio-technical approach that addresses the entire system, from design and deployment through institutional policies and human training, to ensure compliance with international humanitarian law and mitigate the risk of civilian harm.
Copyright © 2025