The increasing use of artificial intelligence and automated decision-making systems in digital public services has created new challenges for administrative law, particularly regarding transparency, accountability, and citizens’ procedural rights. This study examines algorithmic transparency as a legal obligation of government institutions in AI-based public service delivery. Using a normative juridical method with statutory, conceptual, and comparative approaches, this article analyses how the right to explanation can be constructed as part of administrative due process, reason-giving, and good administration. The findings show that the use of algorithmic systems does not reduce the government’s responsibility to provide lawful, reasonable, and reviewable decisions. Instead, the complexity of AI-based decision-making strengthens the need for meaningful explanations that are understandable, case-relevant, and useful for citizens affected by public decisions. This study argues that the right to explanation should not be limited to technical disclosure of algorithmic models, but should include information on whether AI was used, how it influenced the decision, what data and criteria were considered, and what remedies are available. The novelty of this article lies in positioning algorithmic transparency within the doctrinal framework of administrative law, rather than treating it solely as an ethical or technological issue. The study contributes to the development of accountable, citizen-centred, and legally grounded AI governance in digital public administration.
Copyright © 2026