The development of Artificial Intelligence (AI) has brought significant transformation across many sectors of society, while simultaneously introducing new legal complexities, particularly concerning liability for damages caused by AI-based systems. Within the framework of Indonesian positive law, AI is still positioned as a legal object, functioning as part of an electronic system or as an electronic agent, and therefore lacks the capacity to act as an independent legal subject. Consequently, legal liability remains attributed to human actors, namely developers and users. This study aims to analyze the construction of legal liability for AI developers and users under Indonesian positive law and to formulate an expanded model of liability from the perspective of legal philosophy, in order to strike a balance among victim protection, justice, and technological innovation. The research employs a normative juridical method with statutory and conceptual approaches, analyzed qualitatively through deductive reasoning. The findings indicate that fault-based liability remains the dominant paradigm; however, it faces significant limitations in addressing the autonomous and "black box" characteristics of AI, which complicate the establishment of causality. More adaptive liability models are therefore needed, such as strict liability, shared liability, and hybrid liability, supported by philosophical foundations including utilitarianism, deontology, and distributive justice. In addition, strengthening the principles of transparency (explainability) and due diligence, together with collective responsibility mechanisms, is crucial to closing the emerging responsibility gap. In conclusion, a comprehensive reformulation of legal regulation grounded in the values of social justice is urgently required to ensure that the development of AI remains aligned with the principles of legal certainty and the protection of human rights.