The rapid development of artificial intelligence (AI) has accelerated the adoption of autonomous decision-making systems across sectors such as public administration, finance, healthcare, and law enforcement. While AI promises gains in efficiency and objectivity, its deployment raises serious challenges for algorithmic governance and accountability, particularly when the resulting decisions significantly affect individual rights and obligations. This study critically examines the existing legal frameworks and regulatory standards that address AI-based autonomous decision-making and evaluates the extent to which the principles of accountability, transparency, fairness, and legal responsibility can be applied to algorithmic systems. The study employs a literature review, drawing on academic sources, international regulations, public policies, and reports from global institutions relevant to AI governance and algorithmic accountability. The results show that traditional legal standards remain limited in their capacity to accommodate the complex, adaptive, and often opaque ("black box") characteristics of AI. This study therefore emphasizes the need for a regulatory approach that is adaptive, risk-based, and ethically grounded, so that the use of AI remains aligned with the protection of human rights, legal certainty, and public trust in the era of autonomous decision-making.