The increasing integration of Artificial Intelligence (AI) into digital ecosystems has raised new legal challenges concerning data privacy violations. In Indonesia, the normative framework for personal data protection, particularly the Personal Data Protection Act (Law No. 27 of 2022), remains inadequate in addressing the autonomous nature of AI systems that process and exploit personal data without direct human intervention. This research aims to reconstruct a legal accountability model for actors who misuse AI in ways that lead to personal data violations. The study employs a normative juridical method using statutory, conceptual, and comparative approaches, referencing the European Union's GDPR as a benchmark. The findings reveal that Indonesia's current legal framework lacks clarity in assigning responsibility among key actors, such as developers, data controllers, and platform providers. The absence of provisions concerning algorithmic profiling, the legality of training data, and automated decision-making weakens the protection of individuals' digital rights. In contrast, international models, particularly the GDPR, offer a multi-tiered responsibility structure, prohibit fully automated decisions affecting individuals, and impose strict liability for data misuse. This research also demonstrates that adopting principles such as vicarious liability, corporate accountability, and risk-based regulation would fill regulatory gaps and align Indonesian law with international standards. The practical value of this work lies in its proposed model for reconstructing Indonesia's data protection regime. It introduces legal tools that anticipate the systemic risks of AI while ensuring that legal responsibility is clearly distributed across all entities involved in AI deployment. This framework supports the development of a more responsive and equitable legal system in the era of autonomous technologies.
Copyright © 2024