The rapid advancement of Artificial Intelligence (AI) promises to transform public administration by enabling more precise, data-driven, and predictive policy decision-making. Governments worldwide increasingly aspire to leverage AI to solve complex societal problems, yet the transition from traditional bureaucratic methods to AI-augmented governance is fraught with systemic difficulties. This study critically analyzes the multifaceted challenges hindering the effective adoption of AI in public policy decision-making. Using a qualitative approach based on a systematic review of current administrative practices and technological frameworks, the research categorizes obstacles into technical, organizational, and ethical dimensions. The findings show that technical challenges extend beyond infrastructure to deep-seated issues of data quality, privacy, and the interoperability of legacy systems. Organizationally, the study identifies significant resistance rooted in bureaucratic inertia and a critical shortage of digital talent within the civil service, producing a disconnect between technical developers and policy practitioners. Ethical dilemmas, however, present the most formidable barrier: the risks of algorithmic bias, lack of explainability (the "black box" phenomenon), and undefined accountability mechanisms threaten public trust. This article argues that without a comprehensive regulatory framework and cultural transformation within government agencies, AI adoption risks exacerbating existing inequalities rather than reducing them. The study concludes by proposing a strategic governance model that prioritizes human-centric AI design, continuous capacity building for administrators, and rigorous ethical auditing to ensure that AI serves the public interest effectively.
Copyright © 2025