This paper examines discriminatory bias and ethics in the products of artificial intelligence, drawing on the last decade of associated news reports. AI generators and AI-generated content have proliferated over the past two years, following the reintroduction of text-to-image deep learning models in 2015. While these developments have drawn largely positive public responses, this study focuses on the often overlooked problems of bias that endanger the position of humans within artificial production. Working within a Marxist theoretical framework, the chosen case study centres on the theory of machinery replacing human labour set out in Chapter 15 of “Capital”. Drawing on reviews, news reports, articles, interviews, and other secondary sources, this secondary qualitative research contextualizes a past case study before presenting the analysis and accompanying comparative experiments. The cross-case analysis finds a strong correlation between human bias and the biased judgements of artificial intelligence systems, prompting further debate about the safety of all participants and of the participatory data from which the machine learns. The results, however, are circumstantially supported rather than factually evidenced, owing to the lack of primary research beyond the experimental examples. In the author's view, what most needs improvement is the supervision of the data used for learning and classification, which to this day consists largely of unmonitored scraps of data from across the web.
Copyright © 2025