The use of Artificial Intelligence (AI) in parole risk assessments in Indonesia is beginning to be considered as a policy alternative for addressing correctional overcrowding and the limitations of conventional assessment mechanisms. To date, however, no specific and comprehensive legal regulation governs the use of AI in the parole decision-making process. This normative gap raises potential issues concerning the principles of legal certainty, justice, and the protection of prisoners' human rights. This study employs normative legal research methods with a statutory and comparative legal approach. The analysis examines Indonesia's positive legal framework and compares it with practices and regulations in other jurisdictions, including the implementation of the COMPAS system in the United States and the risk-based regulation of the European Union's AI Act. The results show that the use of AI in parole assessments, if not accompanied by adequate legal regulation, has the potential to cause algorithmic bias, limited transparency arising from black-box mechanisms, and challenges to the principle of accountability in administrative decision-making in the correctional sector. Based on these findings, this study emphasizes the importance of formulating a clear and measurable legal framework to regulate the use of AI as a supporting instrument in parole assessments. Such regulation should delimit AI's role as a decision-support tool and establish human oversight mechanisms, principles of transparency, and accountability, so that its implementation remains aligned with the goals of the correctional system and the principles of human rights protection.