This study examines the legal implications of Large Language Models (LLMs) within the Indonesian regulatory framework. As AI technologies rapidly evolve, Indonesia relies primarily on general laws governing data protection, electronic systems, civil liability, and intellectual property to regulate AI deployment. However, these laws were not designed to address the autonomous and generative nature of LLMs. This research employs a normative legal methodology to analyze statutory provisions, identify regulatory gaps, and evaluate constitutional principles relevant to AI governance. The findings reveal regulatory fragmentation, ambiguity in the legality of training data, uncertainty in liability allocation, and limited transparency requirements. The study proposes a risk-based, accountability-oriented governance framework that harmonizes existing sectoral regulations while strengthening human rights protection. By developing a coherent regulatory approach, Indonesia can enhance legal certainty, mitigate technological risks, and promote responsible innovation in the era of artificial intelligence.
Copyright © 2026