Integrating Artificial Intelligence (AI) into military operations marks a paradigm shift, introducing a profound tension between operational opportunity and severe risk to strategic stability. This paper conducts a systematic literature review to investigate this challenge, focusing on the transformation of military decision-making. The analysis confirms that while AI offers significant capabilities in intelligence and logistics, it also introduces a triad of technical, strategic, and human-centric risks. These risks fuel a global arms race and create a crisis of accountability, particularly with the development of autonomous weapons systems. The central problem identified is a critical "governance gap": the rapid, geopolitically driven adoption of military AI has dangerously outpaced the development of effective oversight. This study addresses that gap by synthesising fragmented literature into an integrated, problem-solving framework. It argues that robust ethical governance is necessary to respond to these complex challenges, and that the operationalisation of Meaningful Human Control (MHC) is the cornerstone for closing the "responsibility gap" and ensuring that human agents remain accountable for the use of force. The paper concludes that a prioritised, multi-layered governance strategy, ranging from short-term national testing standards to a long-term international treaty on autonomy, is essential. Pursuing AI-driven military advantage without such reforms will produce unacceptable strategic instability and ethical compromise, undermining the very security it is intended to enhance.