Artificial Intelligence (AI) represents a transformative leap in science and technology, redefining how humans work, interact, and make decisions. Despite its contributions to efficiency and innovation across sectors such as health, education, and governance, AI also generates systemic risks to privacy, equality, non-discrimination, freedom of expression, and the right to work. These challenges expose a normative gap between technological development and the existing international human rights framework. Drawing on key human rights instruments such as the Universal Declaration of Human Rights (UDHR), the International Covenant on Civil and Political Rights (ICCPR), and the International Covenant on Economic, Social and Cultural Rights (ICESCR), as well as global AI governance frameworks including the UNESCO Recommendation on the Ethics of Artificial Intelligence (2021), the OECD AI Principles (2019), and the EU AI Act (2024), this article formulates a human rights-based governance framework for embedding human rights principles throughout the AI lifecycle. It identifies six intersecting principles that must guide each stage of that lifecycle: human dignity; legality, necessity, and proportionality; equality and non-discrimination; privacy and data protection; transparency and explainability; and meaningful human oversight with accountability and remedy. The study argues that a human rights-by-design approach, integrating these principles from conception to deployment, is essential to ensure that AI systems remain lawful, fair, and transparent. Finally, the article emphasizes the urgency for states, particularly Indonesia, to harmonize national AI governance with international standards so as to safeguard human dignity while fostering innovation.
Copyright © 2025