The integration of advanced algorithms and artificial intelligence (AI) into public administration has transformed decision-making processes, offering enhanced efficiency and scalability. However, this technological advancement poses significant challenges to traditional administrative law frameworks, particularly concerning transparency, accountability, and the protection of fundamental rights. The opacity of algorithmic decision-making, often referred to as the "black box" problem, complicates individuals' ability to understand and contest administrative decisions that profoundly affect their lives. Moreover, the potential for embedded biases within AI systems raises concerns about discrimination and fairness in public service delivery. This paper examines the critical role of administrative law in regulating the deployment of advanced algorithms within public administration. It analyzes existing international regulatory approaches, including the European Union's Artificial Intelligence Act and the Council of Europe's Framework Convention on Artificial Intelligence, both of which emphasize risk-based classification, transparency, human oversight, and accountability mechanisms. Drawing on these models, the paper proposes a comprehensive legal framework incorporating mandatory algorithmic impact assessments, enforceable transparency standards, and institutional oversight to ensure that AI applications in public administration align with democratic principles and human rights.