Articles

Found 2 Documents
A CONCEPTUAL FRAMEWORK FOR AI SELF-HEALING FOR BIAS MITIGATION: A PROACTIVE ARCHITECTURAL PROPOSAL
Harianja, Harianja; Syam, Elgamar; Wahab, Alawiyah Abd; Ibrahim, Huda; Awang, Hapini; Mansor, Nur Suhaili; Sidik, Adi Permana
TOPLAMA Vol. 3 No. 2 (2026): TOPLAMA
Publisher : PT Altin Riset Publishing

DOI: 10.61397/tla.v3i2.509

Abstract

As the adoption of Artificial Intelligence (AI) continues to expand across various sectors, the issue of bias in training data has emerged as a significant ethical and technical challenge. AI systems are commonly trained using large-scale datasets collected from digital environments such as the internet, social media, and public databases. These datasets often contain historical inequalities, stereotypes, and unbalanced representations of certain demographic groups. Consequently, AI models may unintentionally replicate and amplify these biases in their predictions or decisions. This situation becomes particularly concerning when AI is used in high-stakes domains such as recruitment, healthcare, financial services, and public policy. Most existing bias mitigation strategies rely on reactive approaches, such as adjusting model outputs or modifying datasets after bias has already been identified. While these methods can reduce certain forms of discrimination, they often require significant manual intervention and may not effectively address bias in dynamic data environments. This research proposes a conceptual framework for an AI self-healing system designed to autonomously detect and correct bias in training data before it influences model outcomes. The proposed framework integrates four key modules: Data Monitoring, Bias Analysis, Automated Bias Correction, and a Feedback Loop and Validation mechanism. Together, these components create a continuous workflow that allows the system to identify bias patterns, apply corrective strategies, and verify fairness before data is used for model training. This framework offers a proactive and sustainable approach to bias mitigation while supporting the development of more ethical, robust, and accountable AI systems.
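The four-module workflow described above can be sketched as a minimal Python loop. The module names (Data Monitoring, Bias Analysis, Automated Bias Correction, Feedback Loop and Validation) follow the abstract, but every function body, the representation-ratio bias metric, the oversampling correction strategy, and the `threshold` parameter are illustrative assumptions, not the paper's actual method:

```python
from collections import Counter

def monitor(records):
    """Data Monitoring: tally how often each demographic group
    appears in the incoming training records."""
    return Counter(r["group"] for r in records)

def analyze_bias(counts, threshold=0.8):
    """Bias Analysis: flag groups whose count falls below `threshold`
    times the largest group's count (a disparate-impact-style ratio;
    assumed metric for illustration only)."""
    majority = max(counts.values())
    return {g for g, c in counts.items() if c / majority < threshold}

def correct(records, flagged):
    """Automated Bias Correction: naively oversample flagged groups up
    to the majority group's size (one of many possible strategies)."""
    counts = Counter(r["group"] for r in records)
    majority = max(counts.values())
    out = list(records)
    for g in flagged:
        pool = [r for r in records if r["group"] == g]
        for i in range(majority - counts[g]):
            out.append(pool[i % len(pool)])  # cycle through existing samples
    return out

def validate(records, threshold=0.8):
    """Validation: re-check fairness before data reaches model training."""
    return not analyze_bias(monitor(records), threshold)

def self_heal(records, threshold=0.8, max_iter=5):
    """Feedback Loop: monitor -> analyze -> correct until the data
    passes validation or the iteration budget is exhausted."""
    for _ in range(max_iter):
        flagged = analyze_bias(monitor(records), threshold)
        if not flagged:
            break  # data already passes the fairness check
        records = correct(records, flagged)
    return records
```

For example, a dataset of eight records from group "A" and two from group "B" would have "B" flagged (ratio 0.25 below 0.8), oversampled to parity, and then pass `validate`. A real system would replace the ratio check and oversampling with domain-appropriate fairness metrics and corrections.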
MOBILE SECURITY PRACTICES AMONG MALAYSIAN CITIZENS: A SURVEY OF RISK AWARENESS AND PROTECTIVE BEHAVIORS
QIN, QIN; Mansor, Nur Suhaili; Awang, Hapini; Ibrahim, Huda; Sidik, Adi Permana
TOPLAMA Vol. 3 No. 2 (2026): TOPLAMA
Publisher : PT Altin Riset Publishing

DOI: 10.61397/tla.v3i2.512

Abstract

Mobile phones have become an integral part of Malaysian youth's daily lives, and as access to mobile devices grows, so do the accompanying cybersecurity concerns. Although reports of online scams and privacy breaches have surged recently, little research exists on how ordinary users perceive mobile security, or whether their behaviors are consistent with good risk mitigation practices. This study examines the relationships among mobile security confidence, user practices, incident experiences, and gender among Malaysian youth. Using a structured survey comprising eight sections, the key findings show that users report moderate confidence in their mobile malware knowledge and broad adoption of protective behaviors such as keeping software updated, downloading apps only from trusted sources, and using biometrics. Meanwhile, 38.1% of respondents admitted to previous security incidents, largely attributable to outdated systems and a lack of oversight for vetting third-party applications. These findings underscore the importance of mobile security awareness in a rapidly digitizing society; yet without habitual practice and supportive system design, awareness alone may not be enough.