The rapid proliferation of misinformation across digital platforms has emerged as a critical challenge, undermining public trust, distorting public opinion, and eroding the quality of democratic discourse. As reliance on digital platforms for information dissemination intensifies, the regulation of online content has become a focal issue for scholars, policymakers, and platform operators. This study examines the intersection of cyber law, platform governance, and content moderation, analyzing how platforms manage misinformation while balancing moderation against freedom of expression. Employing a normative and socio-legal approach, the research uses a comparative methodology to assess national and international cyber law frameworks, alongside case studies of platforms such as Facebook, YouTube, and Twitter. A public policy analysis evaluates the effectiveness of current governance models. The findings reveal significant variation in regulatory responses: the European Union's Digital Services Act offers a robust framework but faces enforcement challenges, while platforms, acting as non-state regulators, are criticized for inconsistent moderation and limited transparency. The study concludes that hybrid regulatory models, which combine state intervention with platform self-regulation, hold promise for addressing misinformation effectively while safeguarding digital rights. This research contributes to ongoing debates on balancing free speech, accountability, and social control in the digital age.