Technological development in the modern era has grown rapidly, one notable example being the advancement of Artificial Intelligence (AI). This technology has become an integral part of daily life because it provides convenience, efficiency, and innovation across many fields. However, alongside the benefits it offers, AI also carries potential risks, especially when misused. One of the most worrying forms of misuse is the deepfake, an AI-based manipulation of digital content that can convincingly imitate a person's voice, face, and movements. Deepfakes have triggered various digital crimes, such as identity forgery, the creation and distribution of non-consensual pornographic content including sexual exploitation, blackmail, the spread of fake news (hoaxes), digital terror, fraud, and defamation. The increasing sophistication of AI in manipulating data demands swift action, appropriate regulation, and effective oversight strategies from the government to anticipate its negative impacts. This research uses a normative juridical method with a statutory approach and a conceptual approach. These approaches allow for a comprehensive analysis of the existing legal framework, the concept of legal protection, and the urgency of establishing new regulations on AI technology. The findings indicate that the government needs to take a number of strategic steps, including: (1) drafting specific regulations governing the use and limitations of AI, particularly regarding deepfakes; (2) developing and implementing effective deepfake detection technology; (3) providing protection, recovery, and rehabilitation mechanisms for victims; and (4) implementing widespread public education to raise awareness of the risks of AI misuse.