Abdullah, Zubaile
Unknown Affiliation

Published : 4 Documents
Articles

Found 3 Documents
Journal : JOIV : International Journal on Informatics Visualization

Hybrid Logistic Regression Random Forest on Predicting Student Performance
Rohman, Muhammad Ghofar; Abdullah, Zubaile; Kasim, Shahreen; Rasyidah, -
JOIV : International Journal on Informatics Visualization Vol 9, No 2 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.2.3972

Abstract

This research aims to investigate the effects of imbalanced data on machine learning, to overcome the imbalance using SMOTE oversampling, and to improve machine learning performance through hyperparameter tuning. The study proposes a model that combines logistic regression and random forest as a hybrid of logistic regression, random forest, and random search CV, using SMOTE oversampling and hyperparameter tuning. The results show that the proposed hybrid of logistic regression, random forest, and random search CV performs more effectively than logistic regression or random forest alone, with accuracy, precision, recall, and F1-score of 0.9574, 0.9665, and 0.9576. This work contributes a practical, data-level model for addressing imbalanced classification in student performance prediction.
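The abstract does not state how the two classifiers are combined, so the following is only a minimal sketch: it assumes the hybrid is a soft-voting ensemble of logistic regression and random forest inside an imbalanced-learn pipeline, tuned with scikit-learn's RandomizedSearchCV. The dataset, hyperparameter ranges, and scoring metric are placeholders, not the authors' setup.

```python
# Hypothetical sketch: SMOTE + hybrid (soft-voting) LR/RF + randomized-search tuning.
# The combination strategy and hyperparameter ranges are assumptions, not the paper's exact setup.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline            # applies SMOTE only during fitting, not prediction
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sklearn.metrics import classification_report

# Placeholder imbalanced dataset standing in for the student-performance data.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

hybrid = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(random_state=42))],
    voting="soft",                                 # average predicted probabilities of LR and RF
)
pipe = Pipeline([("smote", SMOTE(random_state=42)), ("clf", hybrid)])

# Randomized search over an assumed hyperparameter space (the paper's ranges are not given).
param_distributions = {
    "clf__lr__C": [0.01, 0.1, 1.0, 10.0],
    "clf__rf__n_estimators": [100, 200, 400],
    "clf__rf__max_depth": [None, 10, 20],
}
search = RandomizedSearchCV(pipe, param_distributions, n_iter=10, cv=5,
                            scoring="f1_weighted", random_state=42, n_jobs=-1)
search.fit(X_train, y_train)
print(classification_report(y_test, search.predict(X_test)))
```

Using the imbalanced-learn pipeline keeps SMOTE inside the cross-validation folds, so oversampling never leaks into the validation data during tuning.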
Comparative Analysis of Machine Learning Algorithms for Cross-Site Scripting (XSS) Attack Detection
Hamzah, Khairatun Hisan; Osman, Mohd Zamri; Anthony, Tumusiime; Ismail, Mohd Arfian; Abdullah, Zubaile; Alanda, Alde
JOIV : International Journal on Informatics Visualization Vol 8, No 3-2 (2024): IT for Global Goals: Building a Sustainable Tomorrow
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.3-2.3451

Abstract

Cross-Site Scripting (XSS) attacks pose a significant cybersecurity threat by exploiting vulnerabilities in web applications to inject malicious scripts, enabling unauthorized access and execution of malicious code. Traditional XSS detection systems often struggle to identify increasingly complex XSS payloads. To address this issue, this research evaluates the efficacy of Machine Learning algorithms in detecting XSS threats within online web applications. The study conducts a comprehensive comparative analysis of XSS attack detection using four prominent Machine Learning algorithms: Extreme Gradient Boosting (XGBoost), Random Forest (RF), K-Nearest Neighbors (KNN), and Support Vector Machine (SVM). The comparison assesses the selected algorithms through their performance metrics, including the confusion matrix, 10-fold cross-validation, and training time. By exploring the dataset characteristics and evaluating the performance metrics of each algorithm, the study determines the most robust Machine Learning solution for XSS detection. Results indicate that Random Forest is the top performer, achieving 99.93% accuracy and balanced metrics across all criteria evaluated. These findings can significantly enhance web application security by providing reliable defenses against evolving XSS threats.
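As a rough illustration of the comparison protocol described above (not the authors' dataset or feature extraction), the sketch below runs the four classifiers under 10-fold cross-validation and reports mean accuracy, F1-score, and fit time. It assumes the scikit-learn and xgboost packages and uses synthetic features in place of the XSS payload data.

```python
# Hypothetical benchmark sketch: XGBoost vs. RF vs. KNN vs. SVM under 10-fold cross-validation.
# Synthetic features stand in for the XSS payload dataset used in the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from xgboost import XGBClassifier  # assumes the xgboost package is installed

X, y = make_classification(n_samples=3000, n_features=30, random_state=0)

models = {
    "XGBoost": XGBClassifier(eval_metric="logloss", random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
}

for name, model in models.items():
    # cross_validate also records per-fold fit times, a rough proxy for training cost.
    scores = cross_validate(model, X, y, cv=10, scoring=["accuracy", "f1"])
    print(f"{name:14s} acc={scores['test_accuracy'].mean():.4f} "
          f"f1={scores['test_f1'].mean():.4f} fit_time={scores['fit_time'].mean():.2f}s")
```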
A Multi-tier Model and Filtering Approach to Detect Fake News Using Machine Learning Algorithms
Chang Yu, Chiung; A Hamid, Isredza Rahmi; Abdullah, Zubaile; Kipli, Kuryati; Amnur, Hidra
JOIV : International Journal on Informatics Visualization Vol 8, No 2 (2024)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.2.2703

Abstract

Fake news has grown rapidly across societies over the years through social media platforms, and spreading it can easily mislead and manipulate public opinion. Many previous researchers have addressed this domain using classification algorithms or deep learning techniques. However, machine learning algorithms still suffer from a high margin of error, which makes them unreliable, as every algorithm predicts in a different way, while deep learning requires high computation power and a large dataset to operate the classification model. This paper introduces a filtering model with a consensus layer in a multi-tier model. The multi-tier model keeps the news labels correctly predicted by the first two tiers, and the consensus layer acts as the final decision maker when the first two tiers produce conflicting results. The proposed model is implemented in the WEKA software tool and evaluated on both datasets. Two sequences of classification models are used in this paper: LR_DT_RF and LR_NB_AdaBoost. The best-performing sequence on both datasets is LR_DT_RF, which yields a 0.9892 F1-score, 0.9895 accuracy, and 0.9790 Matthews Correlation Coefficient (MCC) on the ISOT Fake News Dataset, and a 0.9913 F1-score, 0.9853 accuracy, and 0.9455 MCC on the CHECKED Dataset. This research could give researchers an approach for fake news detection on different social platforms and feature-based
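The paper implements the model in WEKA; purely as an illustration, the Python sketch below reproduces the decision logic of the LR_DT_RF sequence under the assumption that the first two tiers (logistic regression and decision tree) decide when they agree and the consensus classifier (random forest) resolves collisions. The data and tier roles are assumptions drawn from the abstract, not the authors' implementation.

```python
# Hypothetical Python illustration of the multi-tier LR_DT_RF decision logic
# (the paper itself uses WEKA; tier assignments here are assumptions from the abstract).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef

# Synthetic features standing in for the fake-news datasets.
X, y = make_classification(n_samples=2000, n_features=25, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

tier1 = LogisticRegression(max_iter=1000).fit(X_train, y_train)        # first tier (LR)
tier2 = DecisionTreeClassifier(random_state=1).fit(X_train, y_train)   # second tier (DT)
consensus = RandomForestClassifier(random_state=1).fit(X_train, y_train)  # consensus layer (RF)

p1, p2 = tier1.predict(X_test), tier2.predict(X_test)
# Keep labels the first two tiers agree on; let the consensus layer decide the collisions.
final = np.where(p1 == p2, p1, consensus.predict(X_test))

print(f"accuracy={accuracy_score(y_test, final):.4f} "
      f"f1={f1_score(y_test, final):.4f} mcc={matthews_corrcoef(y_test, final):.4f}")
```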