The rapid advancement of Artificial Intelligence (AI) poses fundamental challenges to the classical civil law framework, particularly Article 1365 of the Indonesian Civil Code (KUHPerdata), which is grounded in the concept of "fault" (schuld). This study analyzes the inadequacy of Article 1365 in addressing damages caused by autonomous and "black box" AI systems. Using a normative legal research method with conceptual and comparative approaches, this article argues that proving the element of "fault" on the part of the developer, the operator, or the AI itself is practically impossible. The autonomous nature of AI severs the traditional chain of causality, while its "black box" characteristic obstructs transparency in evidentiary proceedings. Consequently, a legal vacuum may arise that is detrimental to victims. As a solution, this study proposes a paradigm shift from fault-based liability to strict liability, or at least risk-based liability, for AI operators and developers. This new paradigm is better able to provide legal certainty and protection for victims without the burden of proving an elusive fault.