In the rapidly evolving era of information technology, the emergence of "deepfakes," hyper-realistic digital manipulations powered by artificial intelligence, has introduced complex and pressing challenges to the field of victimology. These synthetic media are increasingly used for cybercrime, online harassment, defamation, and disinformation, leading to serious psychological, reputational, and legal consequences for victims. This study employs a normative legal research method that integrates conceptual, comparative, and futuristic approaches. The conceptual approach explores the legal and psychosocial dimensions of digital victimization; the comparative approach identifies legal responses across jurisdictions; and the futuristic approach projects the trajectory of deepfake threats based on AI development trends and emerging digital behaviors. Unlike previous generalist analyses, this research offers concrete findings: it identifies four dominant forms of digital exploitation through deepfakes, namely non-consensual pornography, political disinformation, financial scams, and reputational sabotage. The study also reveals that psychological trauma, reputational harm, and repeat victimization are the most pressing victimological issues in deepfake cases. By applying content analysis to real-world cases, this paper builds a framework for understanding how deepfakes transform the victim–offender dynamic and proposes forward-looking strategies for legal reform, victim protection, and digital literacy. This study helps fill the current academic gap by offering a victim-centered perspective on the legal and psychosocial consequences of synthetic media, thereby promoting more inclusive and adaptive responses to evolving digital threats.