The exponential growth of artificial intelligence technology presents unprecedented challenges to traditional criminal accountability frameworks, particularly in addressing AI-based digital crimes that operate beyond conventional mens rea doctrines. This study examines the adequacy of Indonesia's criminal law provisions in regulating autonomous AI systems and proposes comprehensive accountability standards designed to achieve optimal deterrence while ensuring legal certainty. Using a normative juridical methodology with legislative, conceptual, and comparative approaches, the study analyzes primary legal materials, including Law Number 1 of 2023 concerning the Criminal Code and Law Number 19 of 2016 concerning Electronic Information and Transactions (ITE), complemented by contemporary academic literature on AI criminality and international cybercrime instruments. The findings reveal critical gaps in the current regulatory framework: existing provisions are inadequate to address the complexities of AI-based crimes such as deepfake fraud, automated hacking, and algorithmic manipulation. The study proposes a hybrid liability model that integrates strict liability, negligence-based liability, and vicarious liability, calibrated to AI risk categories. It concludes that effective deterrence requires reconceptualization beyond punitive sanctions to include preventive mechanisms, while legal certainty demands risk-based differentiated standards and digital forensics capacity building supported by international cooperation frameworks.

Keywords: artificial intelligence; criminal liability; digital crime; prevention theory; legal certainty.