The rapid development of Artificial Intelligence (AI), particularly deepfake technology, presents both opportunities and risks, the latter including fraud, disinformation, and digital sexual crimes. In Indonesia, the legal framework primarily holds users accountable while leaving AI developers largely unregulated, creating legal gaps and weakening victim protection. This study examines criminal liability for AI developers under Indonesian law and proposes an ideal model inspired by the European Union, where developers bear obligations of transparency and labeling, as well as liability for defective AI products. Using normative legal research and comparative analysis, the study finds that shifting accountability toward developers would enhance victim protection, close regulatory gaps, and establish a balanced legal framework that aligns AI innovation with responsibility.
Copyright © 2025