The rapid development of artificial intelligence (AI) technologies has generated complex challenges for criminal law, particularly regarding the attribution of legal liability when AI systems cause harm or facilitate criminal acts. Traditional criminal law doctrine is grounded in the principles of actus reus and mens rea, presupposing human agency and moral culpability. However, AI systems operate autonomously and lack consciousness, raising fundamental questions about responsibility and accountability. This study examines the concept of criminal liability for AI through a comparative analysis between Indonesia and the European Union. While the European Union has adopted a comprehensive regulatory framework through the EU Artificial Intelligence Act and complementary liability instruments, Indonesia currently relies on general criminal law provisions and sectoral regulations without specific AI governance mechanisms. Using normative and comparative legal methods, this research analyzes doctrinal limitations, regulatory approaches, and emerging liability models, including human-centered liability, strict liability, and electronic personhood. The findings indicate that neither jurisdiction recognizes AI as a criminal subject; however, the European Union applies a risk-based regulatory model that enhances accountability for providers and deployers of high-risk AI systems. This article argues that Indonesia should adopt a hybrid framework combining human-centered criminal liability with risk-based regulatory obligations to address accountability gaps while maintaining doctrinal coherence in criminal law.
Copyright © 2026