The growing reliance on algorithmic systems on digital platforms has transformed legal governance and content moderation worldwide. While automation has improved efficiency, it has also created structural problems of transparency, fairness, and accountability. This study examines how algorithmic governance influences procedural justice and public trust in digital environments. Using a qualitative-empirical legal method, the research analyzes 86 regulatory documents, 120 policy reports, and 300 moderated-content cases from 2018 to 2024, complemented by interviews with 10 digital law experts. The findings reveal that algorithmic moderation increased by 45 percent over the observed period, yet 28 percent of deleted content was identified as non-violative, indicating a significant over-moderation bias. The correlation coefficient between transparency and user trust reached 0.85, showing that procedural clarity and appeal mechanisms strongly shape public perceptions of fairness. Furthermore, platforms with higher algorithmic accountability indices display 30 percent better compliance with ethical moderation standards. These results indicate that regulatory fragmentation and the absence of binding oversight mechanisms contribute to inconsistent digital justice outcomes. The study contributes to theoretical discourse by identifying algorithmic systems as de facto legal actors in the digital domain. Its novelty lies in establishing algorithmic accountability as a measurable dimension of digital justice, integrating legal, technical, and sociological perspectives to propose a holistic framework for global algorithmic governance.