Articles

Found 10 Documents

Increasing Trust in AI with Explainable Artificial Intelligence (XAI): A Literature Review Nasien, Dewi; Adiya, M. Hasmil; Anggara, Devi Willeam; Baharum, Zirawani; Yacob, Azliza; Rahmadhani, Ummi Sri
Journal of Applied Business and Technology Vol. 5 No. 3 (2024): Journal of Applied Business and Technology
Publisher : Institut Bisnis dan Teknologi Pelita Indonesia

DOI: 10.35145/jabt.v5i3.193

Abstract

Artificial Intelligence (AI) is one of the most versatile technologies to exist so far. Its applications span as wide as the mind can imagine: science, art, medicine, business, law, education, and more. Although very advanced, AI lacks one key aspect that often limits its contribution to specific fields: transparency. As AI grows in complexity, its programming becomes too complex to comprehend, making its process a "black box" in which humans cannot trace how a result came about. This lack of transparency leaves AI unauditable, unaccountable, and untrustworthy. Explainable Artificial Intelligence (XAI) bridges this gap: it is an approach that makes the processes of AI algorithms comprehensible to people. This allows institutions to be more responsible in developing AI and stakeholders to place more trust in it. Owing to the development of XAI, AI can now play a more significant role in regulated and complex domains. For example, XAI improves risk assessment in finance by making credit evaluation transparent. Another essential application is in medicine, where clearer decision-making increases the reliability and accountability of diagnostic tools.
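One common model-agnostic way to open the "black box" described above is permutation importance: shuffle one input feature at a time and measure how much the model's output changes. The sketch below is purely illustrative and not from the reviewed paper; the `black_box` scorer and its data are hypothetical stand-ins for any opaque model.

```python
import random

# Hypothetical "black box": a fixed linear scorer standing in for any opaque model.
def black_box(x):
    return 3.0 * x[0] + 0.5 * x[1]

def permutation_importance(model, rows, seed=0):
    """Mean absolute change in output when each feature column is shuffled."""
    rng = random.Random(seed)
    base = [model(r) for r in rows]
    importances = []
    for j in range(len(rows[0])):
        col = [r[j] for r in rows]
        rng.shuffle(col)                        # break feature j's relationship to the output
        shuffled = [list(r) for r in rows]
        for i, v in enumerate(col):
            shuffled[i][j] = v
        delta = sum(abs(model(s) - b) for s, b in zip(shuffled, base)) / len(rows)
        importances.append(delta)
    return importances

rows = [[1, 4], [2, 3], [3, 1], [4, 2]]
imp = permutation_importance(black_box, rows)
```

A large importance score flags a feature the model relies on, which is exactly the kind of traceable evidence XAI aims to provide to auditors and stakeholders.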
Optimization of Body Mass Index Classification Using Machine Learning Approach for Early Detection of Obesity Risk Nasien, Dewi; Owen, Steven; Fenly, Fenly; Johanes, Johanes; Lombu, Frendly; Leo, Leo; Baharum, Zirawani
Journal of Applied Business and Technology Vol. 6 No. 3 (2025): Journal of Applied Business and Technology
Publisher : Institut Bisnis dan Teknologi Pelita Indonesia

DOI: 10.35145/jabt.v6i3.201

Abstract

This study aims to optimize the classification of obesity risk at an early stage using Principal Component Analysis (PCA), an important technique in machine learning. PCA reduces the dimensionality of the data while preserving important information, with the advantage of reducing the model complexity that usually increases the risk of overfitting. The obesity dataset is classified using algorithms such as K-Nearest Neighbor (KNN), Support Vector Machine (SVM), Decision Tree, Random Forest, Gradient Boosting Linear, and XGBoost. Each algorithm is chosen for its respective advantages: KNN for nonlinear data, SVM for high-dimensional data, and Random Forest and XGBoost for complex data patterns. Evaluation is carried out using metrics such as accuracy, precision, recall, and F1-score. The results show that the Random Forest and XGBoost algorithms provide the best accuracy, especially when all dataset features are used without PCA reduction. This study is expected to inform the choice of the best algorithm for obesity classification, supporting early detection and facilitating decision-making in health analysis.
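The dimensionality-reduction step described above can be sketched with a plain eigendecomposition of the feature covariance matrix. This is a generic PCA sketch, not the paper's implementation; the feature names in the comment are hypothetical examples of obesity-related inputs.

```python
import numpy as np

def pca_reduce(X, k):
    """Project X onto its top-k principal components (illustrative sketch)."""
    Xc = X - X.mean(axis=0)                    # center each feature
    cov = np.cov(Xc, rowvar=False)             # feature covariance matrix
    vals, vecs = np.linalg.eigh(cov)           # eigh: symmetric input, ascending eigenvalues
    top = vecs[:, np.argsort(vals)[::-1][:k]]  # columns for the k largest eigenvalues
    return Xc @ top

# Hypothetical BMI-style features, e.g. [height, weight, age, activity score]
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
Z = pca_reduce(X, 2)                           # 4 features reduced to 2 components
```

The reduced matrix `Z` would then be fed to the classifiers (KNN, SVM, Random Forest, etc.) in place of the raw features; the first component always carries the largest variance.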
Predictive Analytics for Employability in Malaysian TVET with a Hybrid of Regression and Clustering Methods Mahdin, Hairulnizam; Nurwarsito, Heru; Baharum, Zirawani; Kamri, Khairol Anuar; Hassan, Azman; Haw, Su-Cheng; Arshad, Mohammad Syafwan
JOIV : International Journal on Informatics Visualization Vol 9, No 5 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.5.4516

Abstract

Graduate employability remains a major concern for Technical and Vocational Education and Training (TVET) institutions, particularly within Malaysia’s Technical University Network (MTUN), where producing industry-ready graduates is a central goal. While machine learning has transformed fields like healthcare and finance, its application in vocational education remains underexplored—particularly for employability prediction. This study addresses this gap by hybridizing decision trees and clustering to uncover non-linear patterns in student survey data. Guided by Human Capital Theory and SERVQUAL, which inform variable selection (e.g., technical skills as productivity investments), this study integrates multiple linear regression, decision tree regression, and K-Means clustering to identify significant predictors and uncover latent student groupings. Using a publicly available dataset of Likert-scale responses from MTUN students, technical skills and supervisory support consistently emerged as the most impactful employability predictors. Communication showed moderate influence, while training delivery and problem-solving exhibited variable effects depending on the modelling approach. Unlike regression, decision trees revealed non-linear interaction thresholds. For example, students with SVR < 3.5 and TS < 4.0 had 40% lower employability scores, suggesting targeted mentoring could yield disproportionate improvements. Clustering revealed three distinct student profiles, which could support data-driven interventions. This hybrid framework demonstrates the potential for integrating machine learning into institutional analytics for proactive support of employability.
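The clustering half of the hybrid approach can be sketched with plain K-Means over Likert-scale vectors. This is a minimal generic sketch, not the study's code; the survey points and the two-dimensional [technical skills, supervisory support] layout are hypothetical.

```python
import random

def kmeans(points, k, iters=20, seed=1):
    """Plain K-Means for small Likert-style survey vectors (illustrative)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)            # initialise centres from the data
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to the nearest centre (squared Euclidean distance)
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:                             # recompute centre as coordinate-wise mean
                centers[i] = tuple(sum(d) / len(cl) for d in zip(*cl))
    return centers, clusters

# Hypothetical [technical_skills, supervisory_support] Likert scores (1-5 scale)
points = [(4.5, 4.0), (4.2, 4.4), (1.5, 2.0), (1.8, 1.6), (3.0, 3.1), (2.9, 3.2)]
centers, clusters = kmeans(points, 3)
```

Each resulting cluster corresponds to a student profile like the three reported in the study, which regression coefficients alone would not surface.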
Applying Deep Learning Models to Breast Ultrasound Images for Automating Breast Cancer Diagnosis Khaleefah, Shihab Hamad; Lojungin, Eva Cabrini; Mostafa, Salama A.; Baharum, Zirawani; Aldulaimi, Mohammed Hasan; Ghazal, Taher M.; Alo, Salam Omar; Hidayat, Rahmat
JOIV : International Journal on Informatics Visualization Vol 8, No 3-2 (2024): IT for Global Goals: Building a Sustainable Tomorrow
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.3-2.1912

Abstract

Breast cancer results from uncontrolled human cell division, and the rapid growth in the number of breast cancer patients is a worldwide issue. Most patients are women, but breast cancer also affects men at a much lower rate, and it can be fatal for those suffering from it. Numerous studies have pursued early diagnosis of breast cancer, and it has been proven that tumors can be detected using ultrasound images, with Artificial Intelligence techniques playing a fundamental role in that detection. This paper studies the effectiveness of deep learning (DL) techniques in automating breast cancer diagnosis. It evaluates the diagnostic performance of three DL models using the criteria of accuracy, recall, precision, and F1-score. The DenseNet-169, U-Net, and ConvNet models are selected based on an examination of the related work. The DL diagnosis process involves identifying two types of breast cancer tumors: benign and malignant. The evaluation outcomes show that the most effective of the three models for diagnosing breast cancer is ConvNet, which achieves an accuracy of 91%, a recall of 83%, a precision of 85%, and an F1-score of 83%.
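The four evaluation criteria named above all derive from a binary confusion matrix. The sketch below shows the standard formulas; the counts are hypothetical and are not the paper's results.

```python
def diagnosis_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall and F1 from a binary confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)            # of predicted malignant, how many truly are
    recall = tp / (tp + fn)               # sensitivity: malignant cases actually caught
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# Hypothetical counts for a malignant-vs-benign test split (not from the paper)
acc, prec, rec, f1 = diagnosis_metrics(tp=40, fp=5, fn=8, tn=47)
```

In a diagnostic setting recall is often weighted most heavily, since a false negative (a missed malignant tumor) is costlier than a false positive.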
Software Agent Simulation Design on the Efficiency of Food Delivery Ismail, Shahrinaz; Mostafa, Salama A; Baharum, Zirawani; Erianda, Aldo; Jaber, Mustafa Musa; Jubair, Mohammed Ahmed; Adiya, M. Hasmil
JOIV : International Journal on Informatics Visualization Vol 8, No 1 (2024)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.1.2648

Abstract

Food delivery services have gained popularity since the emergence of online food delivery, and since the recent pandemic the demand for the service has increased tremendously. Because several factors affect how much time riders spend on the road, food delivery companies have little control over the location or timing of delivery riders. There is therefore a need to study and understand food delivery riders' efficiency in order to estimate the service system's capacity. Such a study can help ensure that capacity is sufficient for the number of orders, which usually depends on the number of potential customers within a territory and the time each rider takes to deliver the orders successfully. Since there is little work at the operational level of the food delivery structure, and little simulation capable of predicting from this perspective, this study designs a software agent simulation of rider efficiency in food delivery operations, which could be extended to efficiency prediction. The results presented in this paper are based on the system design phase using the Tropos methodology. As agents move through the simulation, their efficiency is calculated and graphed; once a threshold is crossed, the rider agents are considered to have achieved the efficiency rate required for decision-making. The simulation's primary operations depend on frontline, remotely mobile workers such as food delivery riders, and it can benefit relevant organizations in decision-making during strategic capacity planning.
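The efficiency-threshold idea above can be sketched as a toy agent loop. Everything here is hypothetical — the 0.75 threshold, the 0.8 on-time probability, and the fleet size are illustrative assumptions, not values from the paper.

```python
import random

EFFICIENCY_THRESHOLD = 0.75  # assumed cut-off for decision-making (hypothetical)

def simulate_rider(n_orders, seed):
    """Toy rider agent: efficiency = on-time deliveries / assigned orders."""
    rng = random.Random(seed)
    # Each order is delivered on time with an assumed probability of 0.8
    on_time = sum(1 for _ in range(n_orders) if rng.random() < 0.8)
    return on_time / n_orders

# Hypothetical fleet of five rider agents, 40 orders each
rates = [simulate_rider(40, seed) for seed in range(5)]
capable = [r >= EFFICIENCY_THRESHOLD for r in rates]
```

Plotting `rates` per simulation step, and flagging when they cross the threshold, mirrors the graph-and-threshold behavior the design describes.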
Recent issues of elderly intergenerational instructional strategies: a scoping review Ali, Muhammad Asri Mohd; Ahmad, Nahdatul Akma; Ariff, Mohamed Imran Mohamed; Alias, Nursyahidah; Baharum, Zirawani; Shahdan, Tengku Shahrom Tengku
Journal of Education and Learning (EduLearn) Vol 18, No 3: August 2024
Publisher : Intelektual Pustaka Media Utama

DOI: 10.11591/edulearn.v18i3.21730

Abstract

This scoping review investigates instructional strategies implemented in recent studies to enhance the digital application usage experience for the elderly, addressing emerging issues in the context of a rapidly aging global population. With the World Health Organization predicting a significant increase in the proportion of individuals aged 60 years and above by 2030, the imperative for digital literacy among the elderly becomes crucial. The review, drawing from 14 eligible articles sourced from Web of Science and Scopus, categorizes findings into two main themes: i) intergenerational strategies of instruction and ii) contemporary issues associated with intergenerational approaches. By exploring these dimensions, the paper provides valuable insights for researchers seeking to understand and tackle current challenges in instructing the elderly on digital applications, contributing to the ongoing discourse on improving the quality of life for the aging population through digital technology.
A Nested Monte Carlo Simulation Model for Enhancing Dynamic Air Pollution Risk Assessment Hassan, Mustafa Hamid; Mostafa, Salama A.; Baharum, Zirawani; Mustapha, Aida; Saringat, Mohd Zainuri; Afyenni, Rita
JOIV : International Journal on Informatics Visualization Vol 6, No 4 (2022)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.6.4.1228

Abstract

The risk assessment of air pollution is an essential matter in air quality computing. It provides useful information supporting air quality (AQ) measurement and pollution control, and the outcomes of such evaluations have societal and technical influences on people and decision-makers. Existing air pollution risk assessment employs different qualitative and quantitative methods. This study aims to develop an AQ-risk model based on Nested Monte Carlo Simulation (NMCS) and the concentrations of several air pollutant parameters for forecasting daily AQ in the atmosphere. NMCS consists of two main parts, the Outer and the Inner. The Outer part interacts with the data sources, extracts a proper sample from vast data, and generates a scenario based on the data samples. The Inner part assesses the risk of each scenario and estimates future risk. The AQ-risk model is tested and evaluated using real data on crucial pollutants, collected from an Italian city over a period of one year. The performance of the proposed model is evaluated using statistical indices, the coefficient of determination (R²), and mean square error (MSE). R² measures the prediction ability in the testing stage for both parameters, resulting in prediction accuracies of 0.9462 and 0.9073, while MSE produced average results of 9.7 and 10.3, denoting that the AQ-risk model provides considerably high prediction accuracy.
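The Outer/Inner structure described above can be sketched as two nested sampling loops. This is a generic nested Monte Carlo sketch under assumed distributions — the pollutant mean range, noise level, and exceedance threshold are all hypothetical, not the paper's parameters.

```python
import random

def nested_mc_risk(n_outer=200, n_inner=50, threshold=100.0, seed=7):
    """Outer loop samples pollution scenarios; inner loop estimates each
    scenario's exceedance risk; the result averages the inner estimates."""
    rng = random.Random(seed)
    risks = []
    for _ in range(n_outer):
        # Outer part: draw a scenario-level pollutant mean from the data range
        mean = rng.uniform(60, 120)
        # Inner part: estimate P(daily concentration > threshold) in this scenario
        exceed = sum(1 for _ in range(n_inner)
                     if rng.gauss(mean, 15) > threshold)
        risks.append(exceed / n_inner)
    return sum(risks) / n_outer

risk = nested_mc_risk()   # overall estimated daily exceedance risk, in [0, 1]
```

Separating the loops this way lets the Outer part stay close to the data source while the Inner part concentrates on per-scenario risk, which is the core of the NMCS idea.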
Deep Learning Approach for Prediction of Brain Tumor from Small Number of MRI Images Zailan, Zulaikha N.I.; Mostafa, Salama A.; Abdulmaged, Alyaa Idrees; Baharum, Zirawani; Jaber, Mustafa Musa; Hidayat, Rahmat
JOIV : International Journal on Informatics Visualization Vol 6, No 2-2 (2022): A New Frontier in Informatics
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.6.2-2.987

Abstract

The computer industry has been moving steadily towards machine intelligence. Deep learning, a subfield of machine learning (ML) within artificial intelligence (AI), mimics the functioning of the human brain in analyzing data and generating patterns for decision-making, and it is gaining much attention because of its superior precision when trained with large data. This study uses a deep learning approach to predict brain tumors from magnetic resonance imaging (MRI) medical images. The study follows the CRISP-DM methodology, using three deep learning algorithms — VGG-16, Inception-V3, and MobileNet-V2 — implemented in Python. The algorithms predict from a small number of MRI images, since the dataset has only 98 benign and 155 malignant brain tumor image samples; the main objective of this work is therefore to identify the deep learning algorithm that performs best on small datasets. The performance evaluation is based on the confusion matrix criteria: accuracy, precision, and recall, among others. The classification results of MobileNet-V2 tend to be higher than those of the other models, with a recall of 86.00%; Inception-V3 achieves the second-highest accuracy at 84.00%, and VGG-16 the lowest at 79.00%. This work thus shows that deep learning technology can support brain tumor prediction in the medical field even with a small dataset.
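With only 98 benign and 155 malignant samples, how the data is split matters: a stratified split keeps the class ratio intact in both train and test sets. The sketch below shows that step only; the 20% test fraction is an assumption, and this is not the study's code.

```python
import random

def stratified_split(n_benign, n_malignant, test_frac=0.2, seed=3):
    """Stratified train/test split preserving class balance in a small dataset."""
    rng = random.Random(seed)
    split = {}
    for label, n in (("benign", n_benign), ("malignant", n_malignant)):
        idx = list(range(n))
        rng.shuffle(idx)                 # shuffle within each class separately
        cut = int(n * test_frac)
        split[label] = {"test": idx[:cut], "train": idx[cut:]}
    return split

# Class sizes reported in the abstract: 98 benign, 155 malignant MRI samples
split = stratified_split(98, 155)
```

Without stratification, a random split of a dataset this small could easily leave one class badly under-represented at test time, skewing recall — the metric on which MobileNet-V2 stood out.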
Combining Deep Learning Models for Enhancing the Detection of Botnet Attacks in Multiple Sensors Internet of Things Networks Hezam, Abdulkareem A.; Mostafa, Salama A.; Baharum, Zirawani; Alanda, Alde; Salikon, Mohd Zaki
JOIV : International Journal on Informatics Visualization Vol 5, No 4 (2021)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.5.4.733

Abstract

The impacts of Distributed Denial-of-Service (DDoS) attacks are undeniably significant and, because of the proliferation of IoT devices, are expected to keep rising. Even though many solutions have been developed to identify and prevent this assault, which mainly targets IoT devices, the danger persists and is now larger than ever. Denial-of-service attacks swamp the targeted machines or resources with false requests in an attempt to overpower systems and prevent many or all legitimate requests from being completed. In the last few years, there have been many efforts to apply machine learning to puzzle-like middle-box problems and other Artificial Intelligence (AI) problems. Modern botnets are so sophisticated that they may evolve daily, as in the case of the Mirai botnet. This research presents a deep learning method based on a real-world dataset gathered by infecting nine Internet of Things devices with two of the most destructive DDoS botnets, Mirai and Bashlite, and then analyzing the results. The paper proposes the BiLSTM-CNN model, which combines a Bidirectional Long Short-Term Memory (BiLSTM) recurrent neural network with a Convolutional Neural Network (CNN): the CNN handles data processing and feature optimization, and the BiLSTM performs classification. The model is evaluated against three standard deep learning models — CNN, Recurrent Neural Network (RNN), and Long Short-Term Memory (LSTM-RNN). Fully testing such models requires realistic datasets, which is where the multi-device N-BaIoT dataset comes in: it contains DDoS attacks from the two most used botnets, Bashlite and Mirai. The four models are tested with the 10-fold cross-validation technique.
The obtained results show that the BiLSTM-CNN outperforms all individual classifiers in every aspect, achieving an accuracy of 89.79% and an error rate of 0.1546, with a very high precision of 93.92% and an F1-score and recall of 85.73% and 89.11%, respectively. The RNN achieves the highest accuracy among the three individual models at 89.77%, followed by LSTM at 89.71%, while CNN achieves the lowest accuracy among all classifiers at 89.50%.
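The 10-fold cross-validation protocol mentioned above partitions the dataset into ten disjoint folds, each serving once as the test set. A minimal index-generation sketch (generic, not the paper's code; the sample count of 105 is hypothetical):

```python
def k_fold_indices(n_samples, k=10):
    """Contiguous k-fold split indices, as used in k-fold cross-validation."""
    base, extra = divmod(n_samples, k)
    folds, start = [], 0
    for i in range(k):
        size = base + (1 if i < extra else 0)  # spread any remainder over early folds
        folds.append(list(range(start, start + size)))
        start += size
    return folds

folds = k_fold_indices(105, k=10)
# Each iteration trains on 9 folds and tests on the held-out one,
# so reported metrics are averages over 10 runs.
```

In practice the indices would be shuffled (and, for attack data, often stratified per class) before folding; the contiguous version here just shows the partitioning arithmetic.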
Vehicles Speed Estimation Model from Video Streams for Automatic Traffic Flow Analysis Systems Arriffin, Maizatul Najihah; Mostafa, Salama A.; Khattak, Umar Farooq; Jaber, Mustafa Musa; Baharum, Zirawani; Defni, -; Gusman, Taufik
JOIV : International Journal on Informatics Visualization Vol 7, No 2 (2023)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.7.2.1820

Abstract

Image and video processing have been widely used to provide traffic parameters that improve certain areas of traffic operations. This research aims to develop a model for estimating vehicle speed from video streams to support traffic flow analysis (TFA) systems. The paper proposes a vehicle speed estimation model with three main stages: (1) pre-processing, (2) segmentation, and (3) speed detection. The model uses a bilateral filter in the pre-processing stage to provide shadow-free image quality and sharpen the image. A Gaussian filter and active contours are used to detect and track objects of interest in the image. The pinhole camera model is used to assess the real distance of an object within the image sequence for speed estimation, while a Kalman filter and optical flow are used to smooth uncertainties in vehicle speed and acceleration. The model is evaluated with a dataset of video recordings of moving vehicles at traffic-light junctions on an urban roadway. The average speed estimation error is 20.86%, the average accuracy obtained is 79.14%, and the overall average precision is 0.08.
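The speed-detection stage reduces to converting pixel displacement into road distance and dividing by elapsed time. The sketch below is a simplified version of that arithmetic, not the paper's model: it assumes a known reference width (here a hypothetical 1.8 m car spanning 60 px) in place of a full pinhole-camera calibration.

```python
def pixel_to_metres(pixels, object_width_m, object_width_px):
    """Scale image displacement to road distance via a known reference width
    (a simplification of the pinhole camera model's calibration step)."""
    return pixels * (object_width_m / object_width_px)

def estimate_speed_kmh(disp_px, frames, fps, car_width_m=1.8, car_width_px=60):
    """Speed from pixel displacement over a number of video frames."""
    distance_m = pixel_to_metres(disp_px, car_width_m, car_width_px)
    time_s = frames / fps          # elapsed time from the frame count and frame rate
    return distance_m / time_s * 3.6  # convert m/s to km/h

# Hypothetical clip: 300 px displacement over 15 frames at 30 fps
speed = estimate_speed_kmh(300, 15, 30)
```

Per-frame estimates like this are noisy, which is why the model layers a Kalman filter and optical flow on top to smooth speed and acceleration over time.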