Articles

Found 2 Documents

Facial Beauty Standards Predictions Based on Machine Learning: A Comparative Analysis Sadiq, Bareen Haval; Abdulazeez, Adnan M.
The Indonesian Journal of Computer Science Vol. 13 No. 1 (2024): The Indonesian Journal of Computer Science (IJCS)
Publisher : AI Society & STMIK Indonesia

DOI: 10.33022/ijcs.v13i1.3709

Abstract

This study applies a variety of machine learning classification methods to predict facial beauty standards. Five models (Random Forest, Logistic Regression, Support Vector Machine (SVM), KNN, and Decision Tree) were trained, and the accuracy of each was analyzed. There were noticeable differences in the models' performance: the Logistic Regression and SVM methods achieved almost perfect accuracy, followed closely by Random Forest and KNN. The study offers insight into how these models perform relative to one another and highlights the benefits and drawbacks of each for predicting facial beauty standards.
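The comparison the abstract describes can be sketched with a standard scikit-learn workflow. This is a minimal illustration, not the paper's code: the synthetic dataset, split ratio, and default hyperparameters are all assumptions, since the paper's facial-beauty features are not available here.

```python
# Sketch: train the five classifiers named in the abstract and compare
# their test-set accuracy on a synthetic stand-in dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic binary-classification data standing in for facial features.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

models = {
    "Random Forest": RandomForestClassifier(random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
}

# Fit each model and record its held-out accuracy.
results = {name: accuracy_score(y_test, m.fit(X_train, y_train).predict(X_test))
           for name, m in models.items()}

for name, acc in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {acc:.4f}")
```

Ranking the accuracies, as in the final loop, reproduces the kind of side-by-side comparison the study reports, though the relative ordering on this synthetic data need not match the paper's results.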
Parallel Processing Impact on Random Forest Classifier Performance: A CIFAR-10 Dataset Study Sadiq, Bareen Haval; Zeebaree, Subhi R. M.
The Indonesian Journal of Computer Science Vol. 13 No. 2 (2024): The Indonesian Journal of Computer Science (IJCS)
Publisher : AI Society & STMIK Indonesia

DOI: 10.33022/ijcs.v13i2.3803

Abstract

Using the CIFAR-10 dataset, this research investigates how parallel processing affects the performance of the Random Forest machine learning algorithm. The study highlights accuracy and training time as the critical performance indicators. Two cases were studied, one with and one without parallel processing. The results show the strong predictive power of the Random Forest algorithm, which retains a high accuracy of 97.50% when the data are processed in parallel. In addition, parallelization notably shortens training time, from 0.6187 to 0.4753 seconds. This gain in time efficiency highlights the value of parallelization in carrying out operations simultaneously, improving the computational efficiency of the training process. These results offer useful guidance on optimizing machine learning algorithms with parallel processing techniques.
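The serial-versus-parallel experiment can be sketched with scikit-learn's `n_jobs` parameter, which toggles multi-core tree construction in Random Forest. This is an assumption about the implementation (the abstract does not name a library), and a small synthetic dataset stands in for CIFAR-10 to keep the sketch fast.

```python
# Sketch: time Random Forest training with one worker (serial) and with
# all available cores (parallel), checking that accuracy is unaffected.
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Synthetic data standing in for CIFAR-10 feature vectors.
X, y = make_classification(n_samples=2000, n_features=50, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

timings, accuracies = {}, {}
for label, n_jobs in [("serial", 1), ("parallel", -1)]:
    clf = RandomForestClassifier(n_estimators=100, n_jobs=n_jobs,
                                 random_state=0)
    t0 = time.perf_counter()
    clf.fit(X_train, y_train)               # only the fit is timed
    timings[label] = time.perf_counter() - t0
    accuracies[label] = accuracy_score(y_test, clf.predict(X_test))
    print(f"{label}: train {timings[label]:.4f}s, "
          f"accuracy {accuracies[label]:.4f}")
```

With a fixed `random_state`, the fitted forest is the same in both cases, so accuracy is identical; only wall-clock training time changes, mirroring the paper's finding that parallelization shortens training without costing accuracy. The exact speed-up depends on the core count of the machine.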