General Background: As AI tools proliferate, they raise significant ethical and legal concerns, particularly regarding their misuse in creating deepfake pornography. Specific Background: This phenomenon poses serious risks to individuals, especially social media users and public figures, who may fall victim to manipulated content shared maliciously. Knowledge Gap: Despite growing awareness, the legal frameworks and enforcement mechanisms addressing deepfake-related offenses remain insufficiently explored, particularly in the Indonesian context. Aims: This research analyzes law enforcement strategies against cases of AI-generated deepfake pornography and the implications for victims of AI misuse. Results: Employing a normative juridical methodology, this study reviews primary legislation (the ITE Law, the Pornography Law, the Copyright Law, and the New Criminal Code) as well as relevant secondary sources. The findings indicate that while existing laws provide some recourse, enforcement and legal clarity remain critically lacking. Novelty: This research highlights the distinct challenges posed by deepfake technology and proposes reforms to existing legal frameworks to strengthen protection for victims. Implications: The recommendations advocate improved criminal complaint mechanisms and civil lawsuit avenues for victims, alongside progressive legal reforms governing Artificial Intelligence, to address the evolving landscape of digital misuse and ensure justice for affected individuals.