This study explores the mechanisms of algorithmic bias in the curation of political content on the Twitter/X platform through the lens of machine learning (ML). Amid increasing global polarization, recommendation algorithms are frequently accused of fostering echo chambers. This paper examines how the objective functions of ML models, specifically the maximization of user engagement, inadvertently amplify extremist and partisan content. Through a systematic literature review, the research finds that bias originates not only in training data (data bias) but also in architectural reinforcement mechanisms (reinforcement bias). The findings suggest that the interaction between user behavior and algorithmic feedback loops creates a self-perpetuating cycle of polarization. The study contributes a technical mapping of how collaborative filtering and deep learning algorithms drive the fragmentation of the digital public sphere. The results are intended to serve as a foundational framework for developers and regulators designing curation systems that are more transparent and politically neutral.
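The engagement-driven feedback loop described above can be illustrated with a minimal, purely hypothetical simulation. The sketch below does not model Twitter/X's actual ranking system; the item set, engagement rates, and update rule are invented for illustration. It shows the rich-get-richer dynamic: once a high-engagement item pulls ahead in the ranker's estimate, every additional impression reinforces its score, so it monopolizes subsequent recommendation slots.

```python
def simulate_feedback_loop(engagement_rates, steps=100):
    """Toy recommender: rank items by estimated engagement, where each
    impression of the top item further reinforces its own estimate.

    engagement_rates: hypothetical per-item engagement propensities
                      (deterministic stand-ins for observed clicks/likes).
    Returns the impression count per item after `steps` recommendations.
    """
    # Assume one exploratory impression per item seeds the estimates.
    scores = list(engagement_rates)
    impressions = [1] * len(engagement_rates)

    for _ in range(steps):
        # Greedy engagement maximization: always show the top-scoring item.
        top = max(range(len(scores)), key=lambda i: scores[i])
        impressions[top] += 1
        # Feedback: the shown item's estimate grows by its engagement rate,
        # so exposure begets more exposure.
        scores[top] += engagement_rates[top]
    return impressions

# Three moderate items vs. one highly engaging partisan item
# (all numbers are illustrative assumptions, not measured data).
rates = [0.10, 0.12, 0.11, 0.45]
print(simulate_feedback_loop(rates))  # the last item captures all 100 slots
```

Under these assumptions the partisan item receives every post-exploration impression, while the moderate items are never shown again, mirroring the self-perpetuating cycle the abstract describes.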
Copyright © 2024