The rapid integration of artificial intelligence (AI) into global healthcare systems offers significant opportunities for improved diagnostic accuracy, clinical efficiency, and accelerated medical decision-making. However, these innovations also present complex ethical challenges, particularly regarding patient privacy, algorithmic bias, model transparency, and disparities across international regulatory frameworks. This study employs a systematic literature review to examine the ethics of medical AI worldwide, analyzing 16 peer-reviewed articles selected according to the PRISMA protocol. The findings indicate persistent weaknesses in data protection, limited bias-auditing mechanisms, and unclear accountability structures, all of which threaten core principles of medical ethics. Furthermore, regulatory imbalances between high-income and low-income countries increase the risk of data misuse, especially in jurisdictions with weak digital infrastructure. The study concludes that an integrated ethical framework is essential, one encompassing privacy-by-design protections, algorithmic bias mitigation, adoption of explainable AI, strengthened legal accountability, and harmonization of global standards. These insights contribute to policy development and support the advancement of safe, equitable, and patient-centered medical AI applications.