As artificial intelligence (AI) systems increasingly permeate decision-making processes across sectors, from autonomous vehicles to predictive algorithms in finance and law enforcement, traditional frameworks of criminal liability face unprecedented challenges. This article critically examines the adequacy of existing criminal law doctrines in attributing liability when harm arises from autonomous or semi-autonomous AI actions. It explores the difficulty of establishing actus reus and mens rea where the harmful conduct is algorithmic, and interrogates whether AI entities can or should be treated as legal subjects under penal law. Through a comparative legal analysis of jurisdictions including the United States, the European Union, Japan, and Indonesia, the study identifies divergent approaches to regulating AI-related harm and assigning culpability. The article highlights emerging models such as strict liability, vicarious liability, and hybrid regulatory frameworks, and evaluates their potential for adaptation within Indonesia’s evolving legal system. Special attention is given to the role of developers, corporations, and state actors in shaping accountability mechanisms. The paper concludes by proposing a normative framework for reimagining criminal liability in the age of AI, one that balances innovation with legal certainty and integrates ethical safeguards, technological transparency, and procedural fairness. This framework aims to inform future legislative reform in Indonesia and to contribute to the global discourse on AI governance and criminal justice.