Deepfake technology, a product of artificial intelligence, has drawn wide public attention because of its sophistication in replacing the face of an original subject with that of another person in videos or photos. The technology has been misused to create non-consensual pornographic content, claiming many victims. Over time, this abuse has been facilitated by the spread of the Telegram messaging application, and perpetrators then disseminate and trade the abusive content on social media. This study employs normative juridical research, which aims to analyse existing regulations as well as to identify the regulations needed to fill the legal vacuum. Because no specific regulation governs crimes committed using AI, this vacuum undermines the security and comfort of victims of deepfake pornography. The ITE Law, however, characterises AI in terms of electronic agents and electronic systems. Criminal law does not recognise AI as a legal subject, so responsibility for AI-related crimes falls on the users of the AI technology themselves, particularly in the crime of deepfake pornography.
Copyright © 2025