Hapsari Puspita Rini
Unknown Affiliation

Published: 1 Document
PSYCHOLOGICAL MECHANISMS BEHIND THE ACCEPTANCE OF DEEPFAKE-BASED HUMOR AND DIGITAL HARASSMENT
Kurrota Aini; Vidya Nindhita; Hapsari Puspita Rini
Multidisciplinary Indonesian Center Journal (MICJO) Vol. 2 No. 4 (2025), October 2025 Edition
Publisher: PT. Jurnal Center Indonesia Publisher

DOI: 10.62567/micjo.v2i4.2093

Abstract

The objective of this study was to examine how deepfake-based humor becomes socially acceptable despite its potential to function as digital harassment. The focus was on the psychological mechanisms that explain audience tolerance and normalization of harmful, identity-based humorous content in online environments. A scoping review design was used to map and synthesize existing research across psychology, media studies, and cyberpsychology. Sources were identified through searches in major academic databases and selected for their relevance to deepfake technology, digital humor, online harassment, and psychological processes such as moral disengagement, online disinhibition, empathy reduction, and social norm reinforcement. The results indicate that acceptance of deepfake-based humor is commonly supported by four interrelated mechanisms: normalization through participatory digital culture, psychological distancing that weakens empathy, moral ambiguity created by humorous framing, and reduced accountability through diffusion of responsibility in online spaces. In addition, the literature conceptualizes deepfake humor as a hybrid phenomenon situated between remix-based entertainment and identity-targeting harm, shaped by platform visibility and engagement dynamics. This review highlights that deepfake-based humor may be tolerated not because it is harmless, but because it is routinely framed as “just a joke,” making its harm easier to minimize and socially overlook. The study therefore emphasizes the need for more direct empirical research and stronger interventions to prevent deepfake-based humor from becoming a normalized form of digital harassment in increasingly synthetic digital environments.