The rise of deepfake crimes enabled by artificial intelligence (AI) demands a reform of criminal liability concepts through the expansion of the culpability principle, which raises the question of whether AI can be placed as a subject of law. However, recognizing AI as an independent legal entity (electronic personhood) is untenable, since AI lacks human-like will and moral autonomy. This study therefore proposes a model of criminal liability that extends the culpability principle to providers and users of deepfake technology. Using a normative legal research method based on primary and secondary legal materials, the study comprehensively examines the application of the culpability principle through a comparative approach across jurisdictions. The findings indicate that the most proportional form of liability is the vicarious liability model, originally applied to corporations but adaptable to the AI context. Under this model, software providers may be held liable for acts committed by AI in deepfake crimes, particularly as part of their responsibility under technology governance regulations. The study recommends establishing national regulations that emphasize governance systems based on risk assessment, risk management, and impact assessment, as practiced in the European Union, Canada, and the United States. In conclusion, reforming criminal liability in the AI era is a strategic step to address the growing prevalence of deepfake crimes and to keep the legal system adaptive to technological developments.
Copyright © 2025