Articles

Found 4 Documents

Random Forest Dengan Random Search Terhadap Ketidakseimbangan Kelas Pada Prediksi Gagal Jantung (Random Forest with Random Search for Class Imbalance in Heart Failure Prediction)
Muhammad Ali Abubakar; Muliadi Muliadi; Andi Farmadi; Rudy Herteno; Rahmat Ramadhani
Jurnal Informatika Vol 10, No 1 (2023): April 2023
Publisher : LPPM Universitas Bina Sarana Informatika

DOI: 10.31294/inf.v10i1.14531

Abstract

Prediction of the survival of heart failure patients has been studied to evaluate the accuracy, precision, and overall performance of prediction models and methods, using the heart failure clinical records dataset. However, this dataset is imbalanced, which can degrade a prediction model's performance because the model tends to predict the majority class. This study uses an algorithm-level approach to address the class imbalance, namely the bagging technique in the Random Forest method, combined with hyper-parameter tuning to improve performance further. The models were trained on the dataset and compared with other methods; the results show that Random Forest with Random Search hyper-parameter tuning achieved an AUC of 0.906, while Random Forest without Random Search achieved an AUC of 0.866.
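A minimal illustrative sketch of the setup described above, using scikit-learn's RandomizedSearchCV with ROC AUC scoring; this is not the authors' code, and the CSV path, the DEATH_EVENT target column, and the search-space values are assumptions:

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import RandomizedSearchCV, train_test_split

    # Hypothetical local copy of the heart failure clinical records dataset;
    # DEATH_EVENT is assumed to be the survival target column.
    df = pd.read_csv("heart_failure_clinical_records_dataset.csv")
    X, y = df.drop(columns=["DEATH_EVENT"]), df["DEATH_EVENT"]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42
    )

    # Illustrative Random Search space for the Random Forest hyper-parameters.
    param_dist = {
        "n_estimators": [100, 200, 500],
        "max_depth": [None, 5, 10, 20],
        "min_samples_split": [2, 5, 10],
        "max_features": ["sqrt", "log2"],
    }
    search = RandomizedSearchCV(
        RandomForestClassifier(random_state=42),
        param_distributions=param_dist,
        n_iter=20,
        scoring="roc_auc",  # AUC is the metric reported in the abstract
        cv=5,
        random_state=42,
    )
    search.fit(X_train, y_train)

    # Compare the tuned model against a plain Random Forest by held-out AUC.
    baseline = RandomForestClassifier(random_state=42).fit(X_train, y_train)
    print("Tuned RF AUC:", roc_auc_score(y_test, search.predict_proba(X_test)[:, 1]))
    print("Plain RF AUC:", roc_auc_score(y_test, baseline.predict_proba(X_test)[:, 1]))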
IMPLEMENTATION OF INFORMATION GAIN AND PARTICLE SWARM OPTIMIZATION UPON COVID-19 HANDLING SENTIMENT ANALYSIS BY USING K-NEAREST NEIGHBOR
Riana Riana; Muhammad I Mazdadi; Irwan Budiman; Muliadi Muliadi; Rudy Herteno
JIKO (Jurnal Informatika dan Komputer) Vol 6, No 1 (2023)
Publisher : JIKO (Jurnal Informatika dan Komputer)

DOI: 10.33387/jiko.v6i1.5260

Abstract

In early 2020, a new virus from Wuhan, China, identified as the coronavirus, or COVID-19 (Coronavirus Disease 2019), shocked the entire world. The government has made various attempts to combat this outbreak, and its handling of COVID-19 has drawn both support and criticism. The Indonesian government's response to COVID-19 is one of the most commonly debated subjects on Twitter. This research compares the K-Nearest Neighbor classification algorithm on its own, K-Nearest Neighbor with Information Gain feature selection, and K-Nearest Neighbor with both Information Gain feature selection and Particle Swarm Optimization, to determine which method is more accurate. The K-Nearest Neighbor algorithm was selected because it is frequently used for text and data classification, but it has weaknesses: it can be misled by irrelevant features, and choosing a good value of k is not straightforward. Information Gain feature selection addresses the first issue by discarding less important terms, and Particle Swarm Optimization is employed to optimize the K-Nearest Neighbor classification. The results show that K-Nearest Neighbor with Information Gain feature selection and Particle Swarm Optimization outperforms both the plain K-Nearest Neighbor model and the K-Nearest Neighbor model with Information Gain feature selection alone, reaching an accuracy of 87.33% with K = 5.
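A minimal sketch of the Information Gain and K-Nearest Neighbor part of the pipeline described above, using scikit-learn; mutual_info_classif stands in for Information Gain, TF-IDF vectorization of the tweets and the 500 selected terms are assumptions, and the Particle Swarm Optimization step is omitted (it would search the feature subset or the KNN parameters instead of fixing them):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import Pipeline

    pipeline = Pipeline([
        ("tfidf", TfidfVectorizer()),                     # tweets -> weighted term features
        ("ig", SelectKBest(mutual_info_classif, k=500)),  # keep the most informative terms
        ("knn", KNeighborsClassifier(n_neighbors=5)),     # K = 5, the best value in the abstract
    ])
    # With labelled tweet texts available:
    # pipeline.fit(train_texts, train_labels)
    # accuracy = pipeline.score(test_texts, test_labels)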
Gender Classification Based on Electrocardiogram Signals Using Long Short Term Memory and Bidirectional Long Short Term Memory
Kevin Yudhaprawira Halim; Dodon Turianto Nugrahadi; Mohammad Reza Faisal; Rudy Herteno; Irwan Budiman
Jurnal Ilmiah Teknik Elektro Komputer dan Informatika Vol 9, No 3 (2023): September
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/jiteki.v9i3.26354

Abstract

Gender classification by computer is essential for applications in many domains, such as human-computer interaction and biometric systems. Generally, it is done from a face photo, a fingerprint, or a voice recording; however, researchers have demonstrated the potential of the electrocardiogram (ECG) for biometric recognition and gender classification. To classify gender from ECG signals, this study uses Long Short-Term Memory (LSTM) and Bidirectional Long Short-Term Memory (Bi-LSTM), chosen for their ability to handle sequential data such as ECG signals. Both methods generally take one-dimensional input with a large number of signal features, and each sample in the dataset used in this study has 10,000 features. This research examines how changing the input shape affects classification performance for the LSTM and Bi-LSTM methods; each method was tested with 11 different input shapes. The best accuracy obtained is 79.03% with an input shape of 100×100 for the LSTM method, while the best accuracy for the Bi-LSTM method is 74.19% with an input shape of 250×40. The main contribution of this study is to show the impact of different input shape sizes on the performance of gender classification from ECG signals using LSTM and Bi-LSTM, and to inform the choice between the two methods for this task.
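A minimal sketch, assuming TensorFlow/Keras, of how a flat 10,000-value ECG sample can be reshaped into different (timesteps, features) input shapes for LSTM and Bi-LSTM models, as the study above varies; the layer sizes, optimizer, and placeholder data are assumptions, not the authors' architecture:

    import numpy as np
    from tensorflow.keras import Sequential
    from tensorflow.keras.layers import Bidirectional, Dense, Input, LSTM

    def build_model(timesteps, features, bidirectional=False):
        """Binary gender classifier over ECG input of a given (timesteps, features) shape."""
        rnn = Bidirectional(LSTM(64)) if bidirectional else LSTM(64)
        model = Sequential([
            Input(shape=(timesteps, features)),  # the input shape under test
            rnn,
            Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
        return model

    # Placeholder data standing in for the real ECG dataset: 10,000 features per sample.
    X_flat = np.random.rand(32, 10_000).astype("float32")
    y = np.random.randint(0, 2, size=32)

    # Reshape the same flat signal into the two best-performing shapes from the abstract.
    lstm = build_model(100, 100)
    lstm.fit(X_flat.reshape(-1, 100, 100), y, epochs=1, verbose=0)

    bilstm = build_model(250, 40, bidirectional=True)
    bilstm.fit(X_flat.reshape(-1, 250, 40), y, epochs=1, verbose=0)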
Backward Elimination for Feature Selection on Breast Cancer Classification Using Logistic Regression and Support Vector Machine Algorithms
Salsha Farahdiba; Dwi Kartini; Radityo Adi Nugroho; Rudy Herteno; Triando Hamonangan Saragih
IJCCS (Indonesian Journal of Computing and Cybernetics Systems) Vol 17, No 4 (2023): October
Publisher : IndoCEISS in collaboration with Universitas Gadjah Mada, Indonesia.

DOI: 10.22146/ijccs.88926

Abstract

Breast cancer is a prevalent form of cancer that afflicts women across all nations. One way to reduce its high fatality rate is a detection system that can determine whether a tumor is benign or malignant. The Logistic Regression and Support Vector Machine (SVM) classification algorithms are often used to detect this disease, but they frequently give sub-optimal results on datasets with many features, so an additional step, Backward Elimination feature selection, is needed to improve classification performance. Logistic Regression and SVM were compared, with and without this feature selection, on breast cancer data to find the best model. The breast cancer dataset has 30 features and two classes, benign and malignant. Backward Elimination reduced the feature set from 30 to 13 features, improving the performance of both classification models. The best classification was obtained with Backward Elimination feature selection and a linear-kernel SVM, which increased accuracy from 96.14% to 97.02%, precision from 98.06% to 99.49%, recall from 90.48% to 92.38%, and AUC from 0.95 to 0.96.
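A minimal sketch of backward feature elimination around a linear-kernel SVM and Logistic Regression, as the abstract above describes; scikit-learn's SequentialFeatureSelector with direction="backward" stands in for the paper's Backward Elimination (which may instead drop features by p-value), and the scikit-learn copy of the 30-feature Wisconsin breast cancer data and the train/test split are assumptions:

    from sklearn.datasets import load_breast_cancer
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score, precision_score, recall_score, roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    X, y = load_breast_cancer(return_X_y=True)  # 30 features, benign/malignant labels
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42
    )

    for name, clf in [("Linear SVM", SVC(kernel="linear")),
                      ("Logistic Regression", LogisticRegression(max_iter=5000))]:
        model = make_pipeline(
            StandardScaler(),
            # Backward elimination down to 13 features, the count retained in the abstract.
            SequentialFeatureSelector(clf, n_features_to_select=13, direction="backward"),
            clf,
        )
        model.fit(X_train, y_train)
        y_pred = model.predict(X_test)
        scores = model.decision_function(X_test)  # both classifiers expose decision_function
        print(name,
              "accuracy:", round(accuracy_score(y_test, y_pred), 4),
              "precision:", round(precision_score(y_test, y_pred), 4),
              "recall:", round(recall_score(y_test, y_pred), 4),
              "AUC:", round(roc_auc_score(y_test, scores), 4))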