The integration of artificial intelligence into digital medical diagnostics offers significant benefits but also introduces new risks, including algorithmic malpractice resulting from inaccurate or biased system outputs. This study examines the legal liability framework for algorithmic malpractice using a normative legal research method, analyzing statutory regulations, legal doctrines, and comparative international approaches. The findings indicate that liability remains placed predominantly on physicians, despite evidence that many algorithmic errors originate from design flaws, data bias, or technical failures outside clinical control. Hospitals and algorithm developers also contribute to systemic risks, highlighting the need for a multi-actor liability model. Regulatory reforms are required to establish algorithm audit obligations, risk assessments, human oversight mechanisms, and transparency standards, and to adopt shared or strict liability for developers. This study underscores the necessity of comprehensive regulation to ensure patient protection and legal certainty in the era of medical digitalization.