As artificial intelligence increasingly governs online content moderation, concerns have mounted over its implications for freedom of expression and democratic participation. This paper examines the legal and human rights challenges posed by AI-driven content filtering, focusing on the emergence of chilling effects and disparate impacts across user groups. Using legal doctrinal analysis, the study interrogates how algorithmic moderation models operate and how they align, or fail to align, with international human rights norms. The findings reveal that AI systems frequently suppress lawful speech, especially speech from marginalised communities, owing to biased training data and opaque decision-making processes. Moreover, existing regulatory responses remain fragmented and lack the transparency, accountability, and normative clarity required to uphold free expression. Drawing on recent UN reports and resolutions, the paper highlights growing international critique and supports calls for human rights-based governance to ensure that AI fosters an inclusive, rights-respecting digital age.