Monem, Nadine
Unknown Affiliation

Published: 2 Documents
Articles

Found 2 Documents

Authenticity in Biased Diversity: Investigating the Language of Prompt Performances in AI Image Generators
Tansri, Farrah Faustine; Monem, Nadine; Weinberg, Lee
Journal of Aesthetics, Creativity and Art Management, Vol. 4 No. 1 (2025)
Publisher : Institut Seni Indonesia Denpasar

DOI: 10.59997/jacam.v4i1.5414

Abstract

This paper re-examines discriminatory bias and ethics in the outputs of artificial intelligence systems, drawing on a decade of associated news reports. AI generators and generated content have proliferated over the past two years, following the reintroduction of text-to-image deep learning models in 2015. While these developments drew widely positive public responses, this study focuses on evaluating overlooked problems of bias that endanger the human position within artificial production. Working within a Marxist theoretical framework, the chosen case study centres on the theory of machines replacing human labour set out in Chapter 15 of "Capital". Drawing on reviews, news reports, articles, interviews, and other secondary sources, this secondary qualitative research contextualizes a past case study before the analysis and its accompanying comparative experiments. The cross-case analysis finds a strong correlation between human bias and the biased judgements of artificial intelligence, fuelling further debate over the safety of all participants and of the participatory data from which the machines learn. The results are nonetheless limited: they are circumstantially supported rather than factually evidenced, owing to the lack of primary research beyond the experimental examples. What most needs improvement, in the authors' view, is supervision of the data used as the source of learning and classification, which to this day consists largely of unmonitored scraps from the open web.