Abstract

This research examines the legal issues arising from the misuse of deepfake technology and their impact on the legal protection of victims, particularly in cases of deepfake pornography and AI-based child exploitation in Indonesia. While artificial intelligence opens up opportunities for innovation, its misuse has given rise to new forms of cyber sexual violence, such as AI-generated child sexual abuse material (AI-CSAM), which is not yet explicitly regulated in national law, creating a legal vacuum (rechtsvacuum). Indonesia's positive law, including the ITE Law, the Pornography Law, and the Child Protection Law, does not explicitly accommodate the characteristics of AI-engineered digital content, hampering both law enforcement and victim protection. Law enforcement also faces challenges in obtaining digital forensic evidence and in tracking perpetrators, who often operate anonymously and across borders. Comparative study shows that countries such as the UK have adopted more progressive regulations, including obligations imposed on digital platforms and the establishment of independent oversight bodies. Specific legal and regulatory reform is therefore needed to regulate deepfake technology comprehensively, covering prevention, prosecution, and the responsibility of platform providers. Improving digital literacy and legal awareness is equally crucial to minimizing the negative impact of this technology on victims, including victims of online gender-based violence.

Keywords: Deepfake Pornography, Legal Protection for Victims, Legal Vacuum (Rechtsvacuum)