This study analyzes the fundamental differences between common law and civil law systems in addressing criminal liability for artificial intelligence-based cybercrime. The background to the study is the significant escalation of crime utilizing AI: 87% of global organizations experienced AI-based attacks in 2024, AI-enabled fraud losses are projected to reach $40 billion by 2027, and trade in deepfake tools on dark web forums increased by 223%. The central problem identified by the research is a critical paradox: as AI technology becomes increasingly sophisticated in facilitating cybercrime, the gap between existing legal regulation and operational reality in the field widens, allowing offenders to exploit ambiguities in attribution to evade responsibility. The methodology is a qualitative comparative legal analysis of primary legal documents from both systems, with case studies in four jurisdictions: the United States and the United Kingdom for common law, and Germany and France for civil law, together with the supranational framework of the EU AI Act. The results show that common law systems have developed three models of liability (perpetration-via-another, natural-probable-consequence liability, and direct liability) but still face fundamental difficulties in attributing mens rea to AI systems that lack moral consciousness. Civil law systems, by contrast, adopt a provider-deployer approach through the mechanisms of Organisationsverschulden in Germany and responsabilité pénale in France, which permit liability based on organizational negligence, although these systems often lag in responding to technological developments. The study concludes that a hybrid approach is needed, combining the clarity of civil law codification with the adaptive flexibility of common law, together with cross-jurisdictional harmonization to meet the enforcement challenges of an increasingly autonomous AI era.