This conceptual article examines the ethical crisis in the global social media ecosystem following Meta's 2025 policy shift, which replaced human fact-checkers with AI-driven Llama automation and "Community Notes." Using a qualitative descriptive method and the TARES test framework, the study analyzes the phenomenon of "ethical outsourcing"—the delegation of moral responsibility from professional curation to unverified algorithms and crowdsourced labor. The findings reveal a profound paradox in which operational efficiency compromises informational integrity: AI systems frequently fail to grasp cultural nuances and linguistic complexities, allowing disinformation to be systematically labeled as verified. Furthermore, reliance on community-based moderation introduces risks of collective bias and manipulation by organized interest groups, violating the principles of equity and social responsibility. The study offers a theoretical contribution by redefining digital transparency as "radical honesty" about technological limitations. It argues for restoring the "Human-in-the-Loop" model, repositioning public relations practitioners as "ethical guardians" who provide the moral oversight that automated systems lack. Ultimately, the study concludes that rebuilding public trust requires the structural reintegration of human conscience into algorithmic decision-making, so that digital public spheres remain accountable and humanity-oriented.
Copyright © 2026