The rapid adoption of algorithmic systems in public governance has transformed decision-making and service delivery, promising greater efficiency and transparency. Yet these technologies raise pressing concerns about fairness, bias, and social justice. This study investigates the intersection of digital governance, algorithmic decision-making, and social justice, with particular emphasis on emerging democracies. Employing a qualitative socio-legal approach, the research combines normative analysis of governance regulations, case studies of algorithmic applications in public administration, and interviews with policymakers and technology law experts. Comparative analysis across emerging democracies highlights diverse strategies for addressing equity concerns in algorithmic systems. The findings reveal that while algorithmic systems enhance efficiency, they often reinforce existing inequalities because safeguards against bias and discrimination are insufficient. Moreover, regulatory frameworks remain fragmented and inadequate to ensure fairness and accountability. The study proposes developing adaptive legal frameworks that integrate transparency, accountability, and citizen engagement into AI governance. By embedding social justice principles into algorithmic regulation, governments can foster inclusive policy design and equitable outcomes. This research contributes to ongoing debates on balancing technological innovation with democratic values, emphasizing the need for governance models that prioritize fairness alongside efficiency.
Copyright © 2025