The rapid development of Artificial Intelligence (AI) technologies has significantly transformed digital content production, particularly through deepfake technology, which enables the creation of hyper-realistic audio-visual representations of individuals without their consent. This phenomenon poses substantial threats to individual privacy and the integrity of digital identity, especially in legal systems that lack responsive regulatory frameworks. This study analyzes the extent to which the Indonesian legal system is equipped to protect individual privacy rights from deepfake threats and examines regulatory models from other jurisdictions as comparative references. Employing a normative juridical approach combined with comparative legal analysis, the research reviews national laws, international regulations, academic literature, and case studies involving non-consensual synthetic content. The findings reveal that Indonesia lacks specific legal instruments addressing the authorization of digital images and voices. Although Law No. 27 of 2022 on Personal Data Protection and the Electronic Information and Transactions Law are in place, neither explicitly covers digital identity manipulation. Indonesia's regulatory framework therefore remains insufficient when measured against instruments such as the European Union's General Data Protection Regulation (GDPR) and AI Act, or the United Kingdom's Online Safety Act. The study identifies five legal dimensions relevant to deepfake regulation, three of which, namely digital voice, image rights, and digital authorization, remain unregulated in Indonesia. This research contributes to the discourse on digital law by highlighting the urgency of regulatory reform and the recognition of digital identity as a protected legal right within Indonesia's legal system.