The advancement of Artificial Intelligence (AI) has introduced new legal and ethical challenges, particularly the misuse of deepfake technology to produce non-consensual pornographic content. This phenomenon violates individual privacy and human dignity and exposes the inadequacy of Indonesia’s current legal framework and the state’s regulatory lag. This research employs a normative juridical method with statute, conceptual, and case approaches. The study reveals that Indonesia’s positive legal framework, namely the Electronic Information and Transactions Law, the Pornography Law, and the Sexual Violence Law, does not yet specifically address the use of AI for harmful visual manipulation. This legal vacuum (rechtsvacuum) weakens law enforcement and the protection of victims of gender-based digital violence. As a duty bearer under human rights obligations, the state is responsible for enacting specific AI regulations, strengthening victim protection mechanisms, and implementing a human-rights-based AI governance framework to ensure ethical and just technological development in the digital era.