Artificial Intelligence (AI) technologies are central to global digital transformation, promising greater efficiency and improved decision-making. However, algorithmic bias, the systematic and unfair discrimination embedded in AI systems, remains a pressing concern, especially in the Global South, where these technologies are often deployed without contextual adaptation. This paper examines how data and value systems originating in the Global North shape AI development and contribute to unfair outcomes in developing countries. Using a qualitative literature review grounded in critical data studies and postcolonial theory, it explores digital colonialism and AI systems misaligned with local socio-cultural realities. Key challenges include the lack of representative datasets, cultural misalignment, and weak regulatory frameworks, which together lead to exclusion and discrimination. The study advocates a human rights-centered, context-sensitive AI governance framework emphasizing transparency, local participation, ethical pluralism, and capacity-building. Reframing algorithmic bias as a socio-political issue highlights the urgent need for systemic transformation to ensure that AI promotes equitable and just outcomes globally.
Copyright © 2025