Ahmed Al-Zakhali, Omar
Unknown Affiliation

Published: 2 Documents

Articles
Comparative Analysis of Machine Learning and Deep Learning Models for Bitcoin Price Prediction
Ahmed Al-Zakhali, Omar; Abdulazeez, Adnan M.
The Indonesian Journal of Computer Science Vol. 13 No. 1 (2024): The Indonesian Journal of Computer Science (IJCS)
Publisher : AI Society & STMIK Indonesia

DOI: 10.33022/ijcs.v13i1.3722

Abstract

This research forecasts Bitcoin prices using a suite of machine learning and deep learning models. Five distinct models were employed: Random Forest, Linear Regression, Support Vector Machine (SVM), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU), each evaluated by its R-squared score. The models showed diverse performance: the ensemble learning approach of Random Forest exhibited near-perfect accuracy, followed closely by GRU and SVM. The deep learning architectures, LSTM and GRU, demonstrated strong predictive capability, reflecting their adeptness at capturing intricate temporal patterns in cryptocurrency price data. This study sheds light on the comparative performance of these models, emphasizing their strengths and limitations in predicting Bitcoin prices.
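The kind of R-squared comparison described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the price series here is a synthetic random walk, the lag-window features are an assumption, and the Keras-based LSTM/GRU variants are omitted to keep the sketch self-contained.

```python
# Illustrative sketch: comparing regressors on a synthetic price series by R^2.
# The data, features, and model settings are assumptions, not the paper's setup.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 500)) + 100  # synthetic random-walk "prices"

# Lag features: predict the next price from the previous 5 prices.
window = 5
X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]

# Chronological split, as is usual for time-series evaluation.
split = int(0.8 * len(X))
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

models = {
    "Random Forest": RandomForestRegressor(n_estimators=100, random_state=0),
    "Linear Regression": LinearRegression(),
    "SVM": SVR(kernel="rbf", C=10.0),
}
scores = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    scores[name] = r2_score(y_test, model.predict(X_test))
    print(f"{name}: R^2 = {scores[name]:.3f}")
```

One caveat this sketch makes visible: tree ensembles such as Random Forest cannot extrapolate beyond the price range seen in training, so on a drifting series their out-of-sample R-squared can lag a simple linear baseline.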
Comparative Analysis of XGBoost Performance for Text Classification with CPU Parallel and Non-Parallel Processing
Ahmed Al-Zakhali, Omar; Zeebaree, Subhi; Askar, Shavan
The Indonesian Journal of Computer Science Vol. 13 No. 2 (2024): The Indonesian Journal of Computer Science (IJCS)
Publisher : AI Society & STMIK Indonesia

DOI: 10.33022/ijcs.v13i2.3798

Abstract

This paper presents the findings of a study on how CPU parallel processing affects text classification with Extreme Gradient Boosting (XGBoost). The main goal is to determine whether XGBoost models can classify news articles into predefined categories faster, and no less accurately, when CPU parallelism is enabled. A Keras dataset is preprocessed to extract TF-IDF (Term Frequency-Inverse Document Frequency) features, which are then used to train two variants of the XGBoost classifier: one with parallelism and one without. Each variant is evaluated on both its training and prediction time and its predictive accuracy. The two variants differ markedly in computation time yet achieve nearly identical accuracy: CPU parallel processing makes training proceed more rapidly, and XGBoost exploits that speed effectively. The study thus demonstrates that parallel processing can accelerate XGBoost models without affecting their accuracy, which is useful for text classification.