Muhammad Ibnu Choldun Rachmatullah
Universitas Logistik dan Bisnis Internasional, Bandung

Published: 2 Documents

Perbandingan Metoda K-NN, Random Forest dan 1D CNN untuk Mengklasifikasi Data EEG Eye State (Comparison of the k-NN, Random Forest and 1D CNN Methods for Classifying EEG Eye State Data)
Muhammad Ibnu Choldun Rachmatullah; Aryaputra Wicaksono; Virdiandry Putratama
Journal of Information System Research (JOSH) Vol 4 No 2 (2023): January 2023
Publisher : Forum Kerjasama Pendidikan Tinggi (FKPT)

DOI: 10.47065/josh.v4i2.2998

Abstract

Machine learning methods play an important role in identifying the state of the human eye, particularly in processing Electroencephalogram (EEG) signals. In previous research, eye-state classification was performed either with a combination of supervised and unsupervised learning or with a single supervised method. In this study, EEG Eye State classification uses a single supervised approach, comparing three methods: k-nearest neighbors (k-NN), random forest, and 1D Convolutional Neural Networks (1D CNN). The performance of the three classifiers is measured using four measures: accuracy, recall, precision, and F1-Score. The experimental results show that k-NN has the best performance of the three methods on all four measures, with accuracy = 82.30%, recall = 82.30%, precision = 82.36%, and F1-Score = 82.30%. k-NN is therefore more suitable for classifying EEG Eye State than the other two methods, since all input attributes in the dataset are real-valued.
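As a minimal illustration of the single-method supervised setup described in the abstract, the sketch below trains a k-NN classifier on the UCI EEG Eye State data and reports the same four measures. The file name, the choice of k = 5, the 80/20 stratified split, and the scaling step are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of a k-NN experiment on EEG Eye State (assumptions noted above).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, recall_score, precision_score, f1_score

# 14 real-valued EEG channels plus a binary eyeDetection label (0 = open, 1 = closed).
data = pd.read_csv("eeg_eye_state.csv")  # hypothetical local export of the UCI dataset
X = data.drop(columns=["eyeDetection"]).values
y = data["eyeDetection"].values

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Scale the real-valued attributes; distance-based k-NN is sensitive to feature ranges.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
y_pred = clf.predict(X_test)

# The four measures reported in the abstract, averaged over both classes.
print("accuracy :", accuracy_score(y_test, y_pred))
print("recall   :", recall_score(y_test, y_pred, average="weighted"))
print("precision:", precision_score(y_test, y_pred, average="weighted"))
print("F1-Score :", f1_score(y_test, y_pred, average="weighted"))
```

The random forest and 1D CNN comparisons in the paper would slot into the same pipeline by swapping the classifier.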
Penerapan SMOTE untuk Meningkatan Kinerja Klasifikasi Penilaian Kredit (Application of SMOTE to Improve Credit Scoring Classification Performance)
Muhammad Ibnu Choldun Rachmatullah
JURIKOM (Jurnal Riset Komputer) Vol 10, No 1 (2023): February 2023
Publisher : STMIK Budi Darma

DOI: 10.30865/jurikom.v10i1.5612

Abstract

Machine learning techniques are widely used in many fields, and data is needed to train models. However, the class distribution in most real-world datasets is not always balanced and can be highly imbalanced. When the data is imbalanced, classifier performance depends heavily on the majority class, which makes performance evaluation problematic. One technique that can be applied to balance the data is the Synthetic Minority Oversampling Technique (SMOTE). SMOTE is applied to credit scoring using the German Credit Data (GCD) dataset, which is then classified using four methods: random forest, K-Nearest Neighbor (KNN), Support Vector Machine (SVM), and Multilayer Perceptron (MLP). The effect of applying SMOTE with each classifier is measured using recall, precision, F1-Score, and AUC. Accuracy is also measured to examine whether it is a suitable measure of performance on imbalanced datasets. Based on recall, precision, F1-Score, and AUC, applying SMOTE to the dataset before classifying it with the four methods shows an increase in performance. The highest values are: recall = 82.00% with the random forest method, precision = 75.35% with the MLP method, F1-Score = 76.93% with the MLP method, and AUC = 0.832 with the random forest method. After SMOTE, accuracy decreases slightly with random forest, KNN, and SVM, while with MLP it increases slightly. The contribution of this research is to show the need to handle imbalanced data in order to improve the performance of classification algorithms, especially for credit scoring datasets.
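The sketch below illustrates the SMOTE-then-classify pipeline the abstract describes, using imbalanced-learn's SMOTE and a random forest on a local export of the German Credit Data. The file name, the binary "credit_risk" target (1 = bad credit, the minority class), the one-hot encoding of categorical attributes, and the split parameters are assumptions for illustration, not settings taken from the paper.

```python
# Minimal sketch of applying SMOTE before classification (assumptions noted above).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score, precision_score, f1_score, roc_auc_score
from imblearn.over_sampling import SMOTE

data = pd.read_csv("german_credit_data.csv")  # hypothetical local export of GCD
y = data["credit_risk"]                        # 1 = bad credit (minority class), 0 = good
X = pd.get_dummies(data.drop(columns=["credit_risk"]))  # one-hot encode categoricals

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)

# Oversample only the training set so the test distribution stays realistic.
X_train_bal, y_train_bal = SMOTE(random_state=42).fit_resample(X_train, y_train)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train_bal, y_train_bal)

y_pred = clf.predict(X_test)
y_prob = clf.predict_proba(X_test)[:, 1]

# The performance measures used in the abstract for the imbalanced setting.
print("recall   :", recall_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("F1-Score :", f1_score(y_test, y_pred))
print("AUC      :", roc_auc_score(y_test, y_prob))
```

The other classifiers compared in the paper (KNN, SVM, MLP) would reuse the same resampled training set, with SMOTE fit on the training split only to avoid leaking synthetic samples into the evaluation.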