Public-sector AI is often framed as a technical upgrade to governance, yet its design and deployment are saturated with power. This article develops an operational lens—the “capillaries of power”—to analyze how code, data, and algorithmic architectures shape public accountability. Using a critical thematic synthesis of recent scholarship (2022–2025), we map four recurring modalities through which power is enacted in AI governance: displacement of responsibility, epistemic opacity, embedded bias, and algorithmic surveillance. We detail the analytic workflow (from corpus selection and initial coding to inductive–deductive theme building and argumentative validation) and translate insights into actionable counter-power protocols: meaningful participation, public auditability, human oversight at decisive junctures, and risk-proportionate constraints on high-risk applications. Illustrative policy implications are drawn for Indonesia, including human-in-final-loop arrangements in social assistance targeting, public scorecards for fiscal analytics, ex-ante bias testing for AI-assisted recruitment, and moratorium-by-design for biometric surveillance in “smart city” systems. The study’s contribution is twofold: conceptually, it operationalizes a power-analytic vocabulary for AI governance; practically, it offers a minimal checklist that policy makers in developing countries can use to ensure that AI adoption strengthens—rather than weakens—democratic accountability and social justice.
Copyright © 2025