Harits Ar Rasyid
Universitas Negeri Malang

Published: 2 Documents
Articles

Found 2 Documents

Classification of Engineering Journals Quartile using Various Supervised Learning Models
Nastiti Susetyo Fanany Putri; Aji Prasetya Wibawa; Harits Ar Rasyid; Anik Nur Handayani; Andrew Nafalski; Edinar Valiant Hawali; Jehad A.H. Hammad
ILKOM Jurnal Ilmiah Vol 15, No 1 (2023)
Publisher : Prodi Teknik Informatika FIK Universitas Muslim Indonesia

DOI: 10.33096/ilkom.v15i1.1483.101-106

Abstract

In scientific research, journals are among the primary sources of information. Journals are grouped into quality quartiles: Q1, Q2, Q3, and Q4. These quartiles represent an assessment of a journal's quality. A machine learning classification algorithm is developed as a means of categorizing journals. Classification is the process of assigning a class to an item whose label is unknown. Various classification algorithms, such as K-Nearest Neighbor (KNN), Naïve Bayes, and Support Vector Machine (SVM), are employed in this study under several splits of training and testing data. Cross-validation with confusion-matrix values of accuracy, precision, recall, and classification error is used to analyze classification performance. The classifier with the best accuracy is KNN, with an average accuracy of 70%, followed by Naïve Bayes at 60% and SVM at 40%. These results suggest that the algorithms used in this article can approximate the SJR classification system.
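
As an illustration only, the sketch below shows how such a comparison could be set up in scikit-learn: KNN, Naïve Bayes, and SVM evaluated with cross-validation and confusion-matrix metrics. The feature matrix and quartile labels are random placeholders, not the journal dataset used in the paper.

# Minimal sketch, not the authors' actual pipeline: X and y below are
# random placeholders standing in for the journal dataset used in the paper.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, confusion_matrix)

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))       # placeholder journal features
y = rng.integers(1, 5, size=400)    # placeholder quartile labels (1=Q1 .. 4=Q4)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "Naive Bayes": GaussianNB(),
    "SVM": SVC(kernel="rbf"),
}

for name, model in models.items():
    # Cross-validated predictions, then confusion-matrix-based metrics.
    y_pred = cross_val_predict(model, X, y, cv=5)
    acc = accuracy_score(y, y_pred)
    prec = precision_score(y, y_pred, average="macro", zero_division=0)
    rec = recall_score(y, y_pred, average="macro", zero_division=0)
    print(f"{name}: accuracy={acc:.2f} precision={prec:.2f} "
          f"recall={rec:.2f} error={1 - acc:.2f}")
    print(confusion_matrix(y, y_pred))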
Boosting and bagging classification for computer science journal
Nastiti Susetyo Fanany Putri; Aji Prasetya Wibawa; Harits Ar Rasyid; Andrew Nafalski; Ummi Rabaah Hasyim
International Journal of Advances in Intelligent Informatics Vol 9, No 1 (2023): March 2023
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v9i1.985

Abstract

In recent years, data processing has become an issue across all disciplines. Good data processing can provide decision-making recommendations. Data processing is covered in academic data-processing publications, including those in computer science. This topic has grown over the past three years, demonstrating that data processing is expanding and diversifying, and there is a great deal of interest in this area of study. Within a journal, groupings (quartiles) indicate the journal's influence on other similar studies. SCImago provides this categorization. There are four quartiles, with the highest being 1 and the lowest being 4. There are, however, numerous inconsistencies in quartile classes, with different quartile values for the same journal in different disciplines. Therefore, a classification method is proposed to address this issue. Classification is a machine-learning technique that groups data based on a supplied label class. Ensemble Boosting and Bagging with Decision Tree (DT) and Gaussian Naïve Bayes (GNB) base learners were utilized in this study. Several modifications were made to the ensemble algorithms' depth and estimator settings to examine the influence of changing these values on the resulting precision. In the DT algorithm, both variables are altered, whereas in the GNB algorithm only the estimator value is modified. Based on the average value of the accuracy results, the best algorithm for the computer science dataset is GNB Bagging, with values of 68.96%, 70.99%, and 69.05%. Second-place XGBDT has 67.75% accuracy, 67.69% precision, and 67.83% recall. The DT Bagging method places third with 67.30% accuracy, 68.13% precision, and 67.31% recall. Fourth is the XGBoost GNB approach, with 67.07% accuracy, 68.85% precision, and 67.18% recall. The AdaBoost DT technique ranks fifth with 63.65% accuracy, 64.21% precision, and 63.63% recall. AdaBoost GNB is the least effective algorithm for this dataset, achieving only 43.19% accuracy, 48.14% precision, and 43.20% recall. The results are still quite far from ideal, so the proposed method is not recommended for resolving journal quartile inconsistencies.
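
As a rough illustration of the ensemble setup described above, the sketch below builds bagging and AdaBoost ensembles around Decision Tree and Gaussian Naïve Bayes base estimators in scikit-learn, varying tree depth and the number of estimators. The data are random placeholders, the depth and estimator values are assumptions, and the XGBoost variants (which require the separate xgboost package) are omitted.

# Minimal sketch, not the paper's exact configuration: random placeholder data,
# with Bagging and AdaBoost ensembles built around Decision Tree (DT) and
# Gaussian Naive Bayes (GNB) base estimators.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))       # placeholder journal features
y = rng.integers(0, 4, size=400)    # placeholder quartile labels

ensembles = {}
for n_estimators in (10, 50):            # estimator setting varied for all ensembles
    ensembles[f"GNB Bagging (n={n_estimators})"] = \
        BaggingClassifier(GaussianNB(), n_estimators=n_estimators)
    ensembles[f"AdaBoost GNB (n={n_estimators})"] = \
        AdaBoostClassifier(GaussianNB(), n_estimators=n_estimators)
    for max_depth in (3, 5):             # depth varied only for the DT base
        dt = DecisionTreeClassifier(max_depth=max_depth)
        ensembles[f"DT Bagging (depth={max_depth}, n={n_estimators})"] = \
            BaggingClassifier(dt, n_estimators=n_estimators)
        ensembles[f"AdaBoost DT (depth={max_depth}, n={n_estimators})"] = \
            AdaBoostClassifier(dt, n_estimators=n_estimators)

for name, model in ensembles.items():
    # Mean 5-fold cross-validated accuracy for each ensemble configuration.
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean accuracy = {acc:.3f}")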