The rapid advancement of Artificial Intelligence (AI) in military applications has raised a range of ethical and governance concerns, particularly regarding Autonomous Weapon Systems (AWS) capable of making lethal decisions without direct human involvement. While these developments offer strategic advantages, they also pose significant challenges to accountability, transparency, and compliance with international humanitarian law. This study systematically examines and maps the knowledge structure and global research trends related to the ethical and governance issues of AI in the military domain. The research adopts a Systematic Literature Review (SLR) approach based on the PRISMA protocol, combined with a bibliometric analysis of 469 articles published between 2020 and 2025. The analysis is conducted using VOSviewer to identify thematic clusters, relationships among research topics, and the overall density of scholarly discourse. The findings reveal seven major thematic clusters: ethical foundations and human-centric approaches, operational systems and decision-making, robotics and autonomous systems, military applications and strategy, governance and regulatory frameworks, ethical principles and accountability, and technical foundations based on machine learning. Network visualization indicates that ethical issues are closely interconnected with governance, which forms the central focus of the discourse, while density analysis shows that the terms “artificial intelligence,” “ethics,” and “application” dominate the research landscape. The study also highlights a gap between normative ethical frameworks and their practical implementation in the development and deployment of AI in military contexts. Stronger governance frameworks are therefore required to ensure accountability and compliance with international regulations.
This research contributes by mapping current research directions and identifying future research opportunities, particularly in developing more adaptive and context-aware approaches to AI governance.