The rapid advancement of Artificial Intelligence (AI) in medicine has significantly transformed clinical diagnosis by improving accuracy, efficiency, and the personalization of patient care. However, this innovation also raises complex ethical issues, particularly when viewed through the lens of deontological ethics, which emphasizes moral duty, professional responsibility, and respect for patient rights. This study examines the deontological ethical challenges involved in fulfilling patient rights within AI-assisted clinical diagnosis. The method employed is a qualitative literature review conducted through scientific databases such as Google Scholar, PubMed, and ResearchGate, focusing on publications from 2020 to 2025. A total of 25 journal articles were selected based on inclusion and exclusion criteria relevant to the research topic. The review reveals that the major ethical challenges include a lack of algorithmic transparency (the "black box" problem), potential data bias and discrimination, privacy risks, the shifting of professional responsibility, and diminished patient autonomy in medical decision-making. These challenges directly affect the fulfillment of patients' fundamental rights to information, privacy, justice, and autonomy. The implementation of AI in clinical diagnosis must therefore be accompanied by strong adherence to deontological principles, robust ethical regulation, and multidisciplinary collaboration among healthcare professionals, technologists, and policymakers, so that technology enhances, rather than replaces, human values and moral responsibility in medical practice.
Copyright © 2026