Found 3 Documents
A CONCEPTUAL FRAMEWORK FOR AI SELF-HEALING FOR BIAS MITIGATION: A PROACTIVE ARCHITECTURAL PROPOSAL Harianja, Harianja; Syam, Elgamar; Wahab, Alawiyah Abd; Ibrahim, Huda; Awang, Hapini; Mansor, Nur Suhaili; Sidik, Adi Permana
TOPLAMA Vol. 3 No. 2 (2026): TOPLAMA
Publisher : PT Altin Riset Publishing

DOI: 10.61397/tla.v3i2.509

Abstract

As the adoption of Artificial Intelligence (AI) continues to expand across various sectors, the issue of bias in training data has emerged as a significant ethical and technical challenge. AI systems are commonly trained using large-scale datasets collected from digital environments such as the internet, social media, and public databases. These datasets often contain historical inequalities, stereotypes, and unbalanced representations of certain demographic groups. Consequently, AI models may unintentionally replicate and amplify these biases in their predictions or decisions. This situation becomes particularly concerning when AI is used in high-stakes domains such as recruitment, healthcare, financial services, and public policy. Most existing bias mitigation strategies rely on reactive approaches, such as adjusting model outputs or modifying datasets after bias has already been identified. While these methods can reduce certain forms of discrimination, they often require significant manual intervention and may not effectively address bias in dynamic data environments. This research proposes a conceptual framework for an AI self-healing system designed to autonomously detect and correct bias in training data before it influences model outcomes. The proposed framework integrates four key modules: Data Monitoring, Bias Analysis, Automated Bias Correction, and a Feedback Loop and Validation mechanism. Together, these components create a continuous workflow that allows the system to identify bias patterns, apply corrective strategies, and verify fairness before data is used for model training. This framework offers a proactive and sustainable approach to bias mitigation while supporting the development of more ethical, robust, and accountable AI systems.
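The four-module workflow described in the abstract (Data Monitoring, Bias Analysis, Automated Bias Correction, Feedback Loop and Validation) can be illustrated with a minimal, hypothetical Python sketch. The module names follow the abstract, but the parity heuristic, the oversampling-to-parity correction, and all function and parameter names below are illustrative assumptions, not the paper's actual method.

```python
from collections import Counter
import random

def monitor(records, attr):
    """Data Monitoring: count group representation for a sensitive attribute."""
    return Counter(r[attr] for r in records)

def analyze(counts, threshold=0.8):
    """Bias Analysis (assumed heuristic): flag groups whose count falls
    below `threshold` times the parity level (total / number of groups)."""
    parity = sum(counts.values()) / len(counts)
    return [g for g, n in counts.items() if n < threshold * parity]

def correct(records, attr, flagged, seed=0):
    """Automated Bias Correction (assumed strategy): oversample
    under-represented groups up to the size of the largest group."""
    rng = random.Random(seed)
    counts = monitor(records, attr)
    target = max(counts.values())
    healed = list(records)
    for g in flagged:
        pool = [r for r in records if r[attr] == g]
        healed += rng.choices(pool, k=target - counts[g])
    return healed

def validate(records, attr, threshold=0.8):
    """Feedback Loop and Validation: re-run the analysis and confirm
    no group remains flagged before data is released for training."""
    return not analyze(monitor(records, attr), threshold)
```

In this sketch the feedback loop is simply a second pass through the analysis step; a real system would presumably also re-check fairness metrics on model outputs, which the abstract leaves unspecified.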
MOBILE SECURITY PRACTICES AMONG MALAYSIAN CITIZENS: A SURVEY OF RISK AWARENESS AND PROTECTIVE BEHAVIORS QIN, QIN; Mansor, Nur Suhaili; Awang, Hapini; Ibrahim, Huda; Sidik, Adi Permana
TOPLAMA Vol. 3 No. 2 (2026): TOPLAMA
Publisher : PT Altin Riset Publishing

DOI: 10.61397/tla.v3i2.512

Abstract

Mobile phones have become an integral part of Malaysian youth's daily lives, and their very ease of access brings growing cybersecurity concerns. Although reports of online scams and privacy breaches have surged recently, little is known about how ordinary users perceive mobile security, or whether their behaviors are consistent with good risk mitigation practices. This study examines the relationships among mobile security confidence, user practices, incident experiences, and gender among Malaysian youth. Using an eight-section structured survey as the evaluation instrument, the key findings show that users report medium levels of confidence in their mobile malware knowledge and broadly adopt protective behaviors such as keeping software updated, downloading apps only from trusted sources, and using biometrics. A total of 38.1% of respondents, however, admitted to previous security incidents, largely attributable to outdated systems and a lack of oversight mechanisms for vetting third-party applications. These findings underscore the importance of mobile security awareness in a rapidly digitizing society; yet without habitual practice and supportive system design, awareness alone may not be enough.
Cybersecurity Risk Detection Based on Roblox User Review Analysis Using TF-IDF and Comparison of Naïve Bayes and Support Vector Machine Alam, RG Guntur; Ibrahim, Huda
Jurnal Teknik Informatika (Jutif) Vol. 7 No. 2 (2026): JUTIF Volume 7, Number 2, April 2026
Publisher : Informatika, Universitas Jenderal Soedirman

DOI: 10.52436/1.jutif.2026.7.2.5582

Abstract

The rapid growth of online gaming platforms increases user engagement while also exposing users to technical and cybersecurity risks. User reviews represent a rich yet underutilized textual source that can serve as early indicators of such risks. Unlike prior studies focused on sentiment polarity, this study positions user reviews as early cybersecurity risk signals by mapping complaint patterns into operational security risk categories relevant to system developers. This study compares Naïve Bayes (NB) and Support Vector Machine (SVM) in detecting cybersecurity risks from imbalanced textual data derived from Roblox user reviews. A total of 3,000 reviews were collected from the Google Play Store via web scraping and preprocessed using case folding, normalization, tokenization, stopword removal, and stemming. Reviews were classified into four cybersecurity risk categories (account access issues, suspicious behavior, connection instability, and data loss) based on rule-based security keyword mapping. Text representation employed TF-IDF with unigram and bigram features, while class imbalance was handled through undersampling. Model evaluation used three train–test splits (80:20, 70:30, and 60:40) and was assessed using Accuracy, Macro F1-score, AUC-PR, training time, and statistical testing. Results show that SVM consistently outperforms Naïve Bayes, achieving higher accuracy (0.86–0.88) and substantially better Macro F1-scores (0.73–0.77), indicating more balanced detection of minority cybersecurity risks. These differences are statistically significant (p < 0.05). The novelty of this study lies in transforming user reviews into a structured cybersecurity risk detection framework and empirically demonstrating the robustness of SVM in identifying rare but critical risks from imbalanced data.
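The first two stages of the pipeline described above, rule-based security keyword mapping into the four risk categories followed by TF-IDF weighting, can be sketched in plain Python. The keyword lists below are invented placeholders, since the abstract does not give the paper's actual mappings, and this sketch covers unigrams only; a practical implementation would typically use scikit-learn's `TfidfVectorizer` (with `ngram_range=(1, 2)` for the unigram-plus-bigram features) together with its Naïve Bayes and SVM classifiers.

```python
import math
from collections import Counter

# Hypothetical keyword lists; the paper's actual rule-based mapping
# is not given in the abstract.
CATEGORIES = {
    "account access issues": {"login", "password", "locked"},
    "suspicious behavior": {"hacked", "scam", "phishing"},
    "connection instability": {"lag", "disconnect", "timeout"},
    "data loss": {"lost", "deleted", "missing"},
}

def label(tokens):
    """Rule-based mapping: return the first category whose keyword
    set intersects the (preprocessed) review tokens, else None."""
    toks = set(tokens)
    for cat, keywords in CATEGORIES.items():
        if toks & keywords:
            return cat
    return None

def tfidf(docs):
    """Unigram TF-IDF: term frequency times log(N / document frequency),
    computed over a list of tokenized documents."""
    df = Counter(t for doc in docs for t in set(doc))
    n = len(docs)
    return [
        {t: (c / len(doc)) * math.log(n / df[t]) for t, c in Counter(doc).items()}
        for doc in docs
    ]
```

Note that a term occurring in every document gets weight zero under this unsmoothed IDF, which is why library implementations usually apply smoothing; the abstract does not state which variant the authors used.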