This article examines the effectiveness of Indonesian criminal law, particularly the Electronic Information and Transactions (ITE) Law, the Personal Data Protection Law, and the New Criminal Code, in criminalizing deepfakes as a form of artificial intelligence–based cybercrime with increasingly complex, multidimensional, and socio-politically disruptive impacts. The study employs a normative juridical method through in-depth qualitative analysis, an examination of 15 court decisions from 2023–2025, and a comparative approach drawing on international frameworks such as the EU AI Act and the US DEEPFAKES Accountability Act. The findings show that existing regulations remain largely reactive, lack a technical definition of deepfake, and fail to satisfy the principles of legal certainty (lex certa), preventive and repressive legal protection, proportionality of sanctions, and the balance between freedom of expression and privacy protection. Limitations in digital forensic evidence, inadequate law enforcement capacity to verify AI-generated content, and low public digital literacy produce systemic enforcement ineffectiveness and heighten victim vulnerability. The novelty of this article lies in its integrated approach, which combines positive legal analysis, open-source digital forensic technology, and digital literacy as preventive instruments within a unified theoretical framework. The study recommends regulatory reform through an explicit statutory definition of deepfake, harmonization across the relevant statutes, wider access to affordable digital forensic tools, and the strengthening of national digital literacy to establish an adaptive and just criminal law system in the digital era.