Articles

Found 31 Documents
Performance Analysis of Machine Learning Algorithms for Stroke Disease Classification Using CT Scan Images Sakinah, Nur; Badriyah, Tessy; Syarif, Iwan
Jurnal Teknologi Informasi dan Ilmu Komputer Vol 7 No 4: August 2020
Publisher : Fakultas Ilmu Komputer, Universitas Brawijaya

DOI: 10.25126/jtiik.2020743482

Abstract

Stroke is a condition in which the blood supply to the brain is interrupted, so that the parts of the body controlled by the damaged brain area can no longer function properly. Causes of stroke include blockage of a blood vessel (ischemic stroke) or rupture of a blood vessel (hemorrhagic stroke). Stroke patients must be treated as soon as possible, because brain cells can die within minutes; fast and appropriate treatment reduces the risk of brain damage and prevents complications. This study aims to develop software that reads and analyzes CT scan images of the brain and then automatically predicts whether an image shows an ischemic or a hemorrhagic stroke. The CT scan images were collected at Haji General Hospital Surabaya during the January-May 2019 period from 102 patients with indications of stroke. Before the images are processed by the machine learning algorithms, they go through a pre-processing stage that improves image quality: image conversion, cropping, scaling, greyscaling, noise removal, and augmentation. The next stage is feature extraction using the Gray-Level Co-Occurrence Matrix (GLCM) method. The study also compares the performance of five machine learning algorithms applied to stroke prediction: Naïve Bayes, Logistic Regression, Neural Network, Support Vector Machine, and Deep Learning. The experimental results show that the Deep Learning algorithm performs best, with 96.78% accuracy, 97.59% precision, and 95.92% recall.
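As an illustration of the GLCM feature-extraction step, the sketch below computes a gray-level co-occurrence matrix and three Haralick-style statistics in plain NumPy on a toy image. The offset, number of gray levels, and choice of features are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-Level Co-occurrence Matrix for one pixel offset (dx, dy)."""
    m = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()  # normalise to joint probabilities

def glcm_features(p):
    """Haralick-style contrast, energy and homogeneity from a normalised GLCM."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return contrast, energy, homogeneity

# Toy 4x4 "image" already quantised to a few gray levels.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
feats = glcm_features(glcm(img))
print(feats)
```

In practice several offsets and angles are combined, and the resulting feature vectors feed the classifiers compared in the abstract.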
Implementation and Hyperparameter Optimization of Machine Learning Models for Diabetes Prediction with Telemedicine Application Integration Pahlevi, Muhammad Nur Riza; Badriyah, Tessy
JEPIN (Jurnal Edukasi dan Penelitian Informatika) Vol 11, No 2 (2025): Volume 11 No 2
Publisher : Program Studi Informatika

DOI: 10.26418/jp.v11i2.94600

Abstract

Early detection of diabetes is a crucial step in preventing long-term complications that can endanger patients' health. In this study, a diabetes risk prediction system was designed and tested that uses machine learning algorithms to support early detection and reduce long-term health risks, integrated into a telemedicine application. Machine learning is used to classify a patient's diabetes status based on clinical parameters. Four algorithms were used: K-Nearest Neighbor (KNN), Support Vector Machine (SVM), Logistic Regression, and Decision Tree. These four were chosen because they represent different classification approaches, have a good track record in medical research, and can be optimized to improve predictive accuracy. Hyperparameter optimization covered n_neighbors and weights for KNN; C, kernel, and gamma for SVM; C and penalty for Logistic Regression; and max_depth and min_samples_leaf for Decision Tree, to maximize each model's performance. The dataset comes from RSD Balung Jember and contains 1,450 patient records with 12 medical features. The evaluation shows that the SVM and Decision Tree models perform best, with accuracies of 98.97% and 99.66% respectively and F1-scores of up to 1.00. Precision and recall are also high, indicating that the models classify diabetes status reliably. The prediction system was then integrated into a web-based telemedicine platform built with the Laravel framework. The integration evaluation shows that the model delivers real-time predictions with an average response time of 0.86 seconds, and accuracy remains consistently above 98% after deployment in the application. This demonstrates that a machine learning approach supported by parameter optimization yields an accurate, efficient early-detection system for diabetes that can be applied in practice in digital health services in Indonesia.
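A rough illustration of the hyperparameter search described above: the sketch below runs scikit-learn's GridSearchCV over the four models with the hyperparameters named in the abstract. The grid values and the synthetic dataset are stand-in assumptions, not the study's clinical data or actual search ranges.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the clinical dataset (12 features, binary label).
X, y = make_classification(n_samples=1450, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The grids mirror the hyperparameters named in the abstract; the
# candidate values themselves are illustrative.
searches = {
    "KNN": (KNeighborsClassifier(),
            {"n_neighbors": [3, 5, 7], "weights": ["uniform", "distance"]}),
    "SVM": (SVC(),
            {"C": [0.1, 1, 10], "kernel": ["rbf", "linear"],
             "gamma": ["scale", "auto"]}),
    "LogReg": (LogisticRegression(solver="liblinear"),
               {"C": [0.1, 1, 10], "penalty": ["l1", "l2"]}),
    "Tree": (DecisionTreeClassifier(random_state=0),
             {"max_depth": [3, 5, None], "min_samples_leaf": [1, 5, 10]}),
}
results = {}
for name, (model, grid) in searches.items():
    gs = GridSearchCV(model, grid, cv=5).fit(X_tr, y_tr)
    results[name] = gs.score(X_te, y_te)
    print(name, gs.best_params_, round(results[name], 3))
```

Each search cross-validates every grid combination and reports the best parameter set alongside held-out accuracy.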
Development of a Java Library with Bacterial Foraging Optimization for Feature Selection of High-Dimensional Data Badriyah, Tessy; Syarif, Iwan; Hardiyanti, Fitriani Rohmah
JOIV : International Journal on Informatics Visualization Vol 8, No 1 (2024)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.1.2149

Abstract

High-dimensional data allows researchers to conduct comprehensive analyses. However, such data often exhibits characteristics like small sample sizes, class imbalance, and high complexity, posing challenges for classification. One approach employed to tackle high-dimensional data is feature selection. This study uses the Bacterial Foraging Optimization (BFO) algorithm for feature selection. A dedicated BFO Java library is developed to extend the capabilities of WEKA for feature selection purposes. Experimental results confirm the successful integration of BFO. The outcomes of BFO's feature selection are then compared against those of other evolutionary algorithms, namely Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), and Ant Colony Optimization (ACO), with all algorithms compared on the same datasets. The experimental results indicate that BFO effectively reduces features while maintaining consistent accuracy: BFO outperforms the other algorithms in 4 out of 9 datasets and shows superior processing time in 6 datasets. BFO is therefore a favorable choice for selecting features in high-dimensional datasets, providing consistent accuracy and effective processing. The small optimal fraction of features in the Ovarian Cancer dataset means that only a minimal number of attributes is retained, which speeds up the learning process. Remarkably, accuracy also increased substantially, from 0.868 before feature selection to 0.886 after, and the classification processing time was shortened from 56.8 seconds to just 0.3 seconds.
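The library itself is written in Java for WEKA, but the core idea can be sketched in Python: bacteria are binary feature masks improved by tumble-and-swim chemotaxis plus a reproduction phase. The fitness function, population sizes, and data below are toy assumptions, and the sketch omits BFO's swarming and elimination-dispersal phases.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 20 features, only the first 3 carry signal.
X = rng.normal(size=(200, 20))
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(int)

def fitness(mask):
    """Reward features correlated with y, penalise large subsets."""
    if mask.sum() == 0:
        return -1.0
    corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in np.flatnonzero(mask)])
    return corr.mean() - 0.01 * mask.sum()

def bfo_select(n_bacteria=10, n_chem=30, n_repro=4, flip=2):
    pop = rng.integers(0, 2, size=(n_bacteria, X.shape[1]))
    for _ in range(n_repro):              # reproduction loop
        for _ in range(n_chem):           # chemotaxis loop
            for b in range(n_bacteria):
                trial = pop[b].copy()     # tumble: flip a few random bits
                trial[rng.choice(X.shape[1], flip, replace=False)] ^= 1
                if fitness(trial) > fitness(pop[b]):
                    pop[b] = trial        # "swim": keep the better position
        # reproduction: the healthier half survives and is duplicated
        order = np.argsort([fitness(b) for b in pop])[::-1]
        pop = np.concatenate([pop[order[: n_bacteria // 2]]] * 2)
    return max(pop, key=fitness)

best = bfo_select()
print("selected features:", np.flatnonzero(best))
```

A wrapper-style fitness (cross-validated classifier accuracy) would replace the correlation heuristic in a full implementation.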
Personalized Tourism in Surabaya: A Bayesian Network Approach Faradisa, Rosiyah; Badriyah, Tessy; Maulana, Hanan Ammar; Assidiqi, Moh Hasbi
JOIV : International Journal on Informatics Visualization Vol 9, No 3 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.3.3376

Abstract

This study investigates the application of Bayesian Networks in developing a personalized tourist destination recommendation system focused on Surabaya, Indonesia. The research incorporates push and pull factors alongside tourist activities as key input variables to model decision-making processes. Two distinct Directed Acyclic Graph (DAG) structures are evaluated: one proposed based on existing theoretical frameworks and another generated from empirical respondent data. The dataset comprises responses from 1,350 tourists visiting twenty-five popular attractions in Surabaya. The analysis reveals that Bayesian Networks effectively identify correlations between various influencing factors. In the tests carried out, the accuracy obtained from the two DAG structures did not differ significantly: the proposed DAG achieved 35% accuracy for the top-ranked destination recommendation, the data-driven DAG 25%, and both achieved 75% accuracy for the top five recommendations. Accuracy increased as the number of output states was reduced; in the test with binary output, the Bayesian Network classified tourist destinations with an average accuracy of 95% for both DAGs. These findings highlight the potential of Bayesian Networks to enhance the personalization and effectiveness of tourism decision support systems by providing nuanced insights into tourists' preferences and motivations. For further research, hybridization or feature engineering can be employed to improve model accuracy, and push factors and tourist activities better suited to the tourism case study should be determined to capture tourist preferences more accurately.
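A minimal sketch of the Bayesian-network idea, assuming a hypothetical three-node chain (push factor → activity → destination) with made-up conditional probability tables; the paper's DAGs and variables are considerably richer. Inference here is plain enumeration, summing out the hidden node.

```python
# Hypothetical CPTs (not the paper's learned parameters).
P_push = {"relaxation": 0.6, "novelty": 0.4}
P_act_given_push = {                      # P(activity | push factor)
    "relaxation": {"culinary": 0.7, "heritage": 0.3},
    "novelty":    {"culinary": 0.2, "heritage": 0.8},
}
P_dest_given_act = {                      # P(destination type | activity)
    "culinary": {"food_street": 0.8, "museum": 0.2},
    "heritage": {"food_street": 0.1, "museum": 0.9},
}

def posterior_destination(push):
    """P(destination | push factor) by summing out the activity node."""
    out = {}
    for act, p_act in P_act_given_push[push].items():
        for dest, p_dest in P_dest_given_act[act].items():
            out[dest] = out.get(dest, 0.0) + p_act * p_dest
    return out

def marginal_destination():
    """P(destination), additionally summing out the push-factor node."""
    out = {}
    for push, p_push in P_push.items():
        for dest, p in posterior_destination(push).items():
            out[dest] = out.get(dest, 0.0) + p_push * p
    return out

post = posterior_destination("novelty")
marg = marginal_destination()
print(post, marg)
```

Ranking destinations by their posterior probability given a tourist's stated push factors and activities is exactly the recommendation step the abstract evaluates.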
Building a Recommendation System for Online Shopping Based on Item-Based Collaborative Filtering Badriyah, Tessy; Prasetyaningrum, Ira; Adhi P., Basik
Proceeding ISETH (International Summit on Science, Technology, and Humanity) 2015
Publisher : Universitas Muhammadiyah Surakarta

DOI: 10.23917/iseth.2373

Abstract

This research applies an innovation in online shopping development: a recommendation system. The recommendation system applies a knowledge-discovery technique called item-based Collaborative Filtering, which works by building information about the items customers prefer. Collaborative Filtering filters data based on similarities or shared characteristics, so the system can provide information based on patterns found in groups of closely related data. With a recommendation system, customers benefit from automatically generated recommendations of items they may favour. It is hoped that this improves shopping convenience and reduces the time customers need to search for items, and therefore increases the competitiveness of online shops that use a recommendation system.
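A minimal sketch of item-based Collaborative Filtering, assuming a toy rating matrix: item-item cosine similarities are computed from the columns, and an unseen rating is predicted as a similarity-weighted average of the user's existing ratings. The matrix and weighting scheme are illustrative, not the paper's data.

```python
import numpy as np

# Toy user-item rating matrix (rows: users, cols: items; 0 = unrated).
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 0, 4],
              [0, 1, 5, 4]], dtype=float)

def item_similarity(R):
    """Cosine similarity between item (column) rating vectors."""
    norms = np.linalg.norm(R, axis=0)
    sim = (R.T @ R) / np.outer(norms, norms)
    np.fill_diagonal(sim, 0.0)            # ignore self-similarity
    return sim

def predict(R, user, item, sim):
    """Weighted average of the user's ratings on items similar to `item`."""
    rated = R[user] > 0
    w = sim[item, rated]
    if w.sum() == 0:
        return 0.0
    return float(R[user, rated] @ w / w.sum())

sim = item_similarity(R)
pred = predict(R, user=0, item=2, sim=sim)
print(round(pred, 2))
```

In an online shop, the items with the highest predicted scores for a customer become that customer's recommendation list.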
Hybrid Modeling KMeans – Genetic Algorithms in the Health Care Data Badriyah, Tessy
EMITTER International Journal of Engineering Technology Vol 2 No 1 (2014)
Publisher : Politeknik Elektronika Negeri Surabaya (PENS)

DOI: 10.24003/emitter.v2i1.18

Abstract

K-Means is one of the major algorithms widely used in clustering due to its good computational performance. However, K-Means is very sensitive to the initial points, which are selected randomly, and therefore it does not always generate optimum solutions. A genetic algorithm approach can be applied to solve this problem. In this research we examine the potential of applying a hybrid GA-KMeans approach with a focus on health care data. We propose a new technique combining K-Means clustering and Genetic Algorithms, called the "Hybrid K-Means Genetic Algorithms" (HKGA). HKGA combines the power of Genetic Algorithms and the efficiency of K-Means clustering. We compare our results with other conventional algorithms as well as with other published research. Our results demonstrate that HKGA achieves very good results, in some cases superior to other methods.

Keywords: Machine Learning, K-Means, Genetic Algorithms, Hybrid K-Means Genetic Algorithms (HKGA).
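A toy sketch of the hybrid idea, under illustrative assumptions: a genetic loop evolves candidate centroid sets (elitist selection plus Gaussian mutation), and each chromosome is locally refined with a few K-Means (Lloyd) iterations. The operators, parameters, and data are not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: three Gaussian blobs in 2-D.
X = np.concatenate([rng.normal(c, 0.3, size=(50, 2))
                    for c in ([0, 0], [4, 0], [2, 4])])
K = 3

def sse(centroids):
    """Within-cluster sum of squared errors (lower is better)."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return float((d.min(axis=1) ** 2).sum())

def kmeans_step(centroids, iters=5):
    """A few Lloyd iterations to locally refine a chromosome."""
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centroids[None], axis=2).argmin(1)
        for k in range(K):
            if (labels == k).any():
                centroids[k] = X[labels == k].mean(0)
    return centroids

# GA over centroid sets: each generation refines, selects, and mutates.
pop = [X[rng.choice(len(X), K, replace=False)].copy() for _ in range(12)]
for _ in range(10):
    pop = [kmeans_step(c) for c in pop]
    pop.sort(key=sse)
    parents = pop[:6]                                   # elitist selection
    children = [p + rng.normal(0, 0.5, p.shape) for p in parents]  # mutation
    pop = parents + children

best_sse = sse(min(pop, key=sse))
print("best SSE:", round(best_sse, 2))
```

The GA explores different initialisations while K-Means supplies fast local convergence, which is the division of labour the abstract describes.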
Comparison of The Data-Mining Methods in Predicting The Risk Level of Diabetes Wicaksono, Andri Permana; Badriyah, Tessy; Basuki, Achmad
EMITTER International Journal of Engineering Technology Vol 4 No 1 (2016)
Publisher : Politeknik Elektronika Negeri Surabaya (PENS)

DOI: 10.24003/emitter.v4i1.119

Abstract

Diabetes mellitus is an illness that occurs when the glucose level in the blood is too high because the body cannot release or use insulin normally. The purpose of this research is to compare two data-mining methods, Logistic Regression and a Bayesian method, for predicting the risk level of diabetes through a web-based application using nine attributes of patient data. The data used in this research are 1,450 patient records taken from RSD Balung Jember, collected from 26 September 2014 until 30 April 2015. The performance of the two methods is measured with a discrimination score using the ROC (Receiver Operating Characteristic) curve. The experimental results show that the two methods have different strengths and that both perform well. On the same dataset, the Bayesian method achieved the highest accuracy with a score of 0.91, while Logistic Regression achieved the highest ROC score with 0.988, compared to 0.964 for the Bayesian method. An additional advantage of the Bayesian method found in this research is that it can handle not only categorical but also numerical attributes.
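A minimal sketch of the ROC-based comparison, with two assumptions: Gaussian Naive Bayes stands in for the paper's Bayesian method, and a synthetic nine-attribute dataset stands in for the patient records. Both models are scored with the area under the ROC curve.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for the nine-attribute patient data.
X, y = make_classification(n_samples=1450, n_features=9, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

aucs = {}
for model in (LogisticRegression(max_iter=1000), GaussianNB()):
    # ROC-AUC needs class probabilities, not hard labels.
    proba = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    aucs[type(model).__name__] = roc_auc_score(y_te, proba)
    print(type(model).__name__, "AUC:", round(aucs[type(model).__name__], 3))
```

The AUC summarises the whole ROC curve in one discrimination score, which is how the abstract reports the 0.988 vs 0.964 comparison.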
Influence of Logistic Regression Models For Prediction and Analysis of Diabetes Risk Factors Maulana, Yufri Isnaini Rochmat; Badriyah, Tessy; Syarif, Iwan
EMITTER International Journal of Engineering Technology Vol 6 No 1 (2018)
Publisher : Politeknik Elektronika Negeri Surabaya (PENS)

DOI: 10.24003/emitter.v6i1.258

Abstract

Diabetes is a very serious chronic disease. Diabetes can occur when the pancreas does not produce enough insulin (a hormone used to regulate blood sugar), causing glucose in the blood to be high. The purpose of this study is to provide a different approach to dealing with cases of diabetes: data-mining techniques using a logistic regression algorithm to predict and analyze the risk of diabetes, implemented in a mobile framework. The dataset used for modeling with the logistic regression algorithm was taken from Soewandhie Hospital between 1 August and 30 September 2017. The attributes obtained from the hospital laboratory number 11; after removing the medical record number, 10 attributes remain. During data preparation the dataset is preprocessed with missing-value replacement, normalization, and feature extraction to produce good accuracy. The results of this research are a performance measure based on the ROC curve, together with an analysis, using p-values, of the attributes that influence diabetes. Using the logistic regression model and leave-one-out validation, an accuracy of 94.77% was obtained. Nine attributes affect diabetes: age, hemoglobin, sex, blood sugar, serum creatinine, white cell count, urea, total cholesterol, and BMI; the triglycerides attribute has no significant effect on diabetes.
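The p-value analysis of risk factors can be sketched as an unpenalised logistic fit followed by Wald tests. The Newton-Raphson fit, the synthetic attributes, and the effect sizes below are illustrative assumptions, not the study's data or exact procedure.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two informative attributes and one pure-noise attribute.
n = 500
X = np.column_stack([np.ones(n),           # intercept
                     rng.normal(size=n),   # hypothetical "age"
                     rng.normal(size=n),   # hypothetical "bmi"
                     rng.normal(size=n)])  # noise ("triglycerides" stand-in)
logit = 1.5 * X[:, 1] - 1.0 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Unpenalised logistic regression via Newton-Raphson (IRLS).
beta = np.zeros(X.shape[1])
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    H = X.T @ (X * W[:, None])                  # observed information
    beta += np.linalg.solve(H, X.T @ (y - p))   # score step

# Wald test: z = beta / SE, two-sided p-value from the normal CDF.
se = np.sqrt(np.diag(np.linalg.inv(H)))
z = beta / se
pvals = [2 * (1 - 0.5 * (1 + math.erf(abs(zi) / math.sqrt(2)))) for zi in z]
for name, b, pv in zip(["intercept", "age", "bmi", "noise"], beta, pvals):
    print(f"{name:9s} beta={b:+.2f} p={pv:.4f}")
```

Attributes with small p-values are reported as influential; the noise column plays the role of the non-significant triglycerides attribute.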
Arrhythmia Classification Using Long Short-Term Memory with Adaptive Learning Rate Assodiky, Hilmy; Syarif, Iwan; Badriyah, Tessy
EMITTER International Journal of Engineering Technology Vol 6 No 1 (2018)
Publisher : Politeknik Elektronika Negeri Surabaya (PENS)

DOI: 10.24003/emitter.v6i1.265

Abstract

Arrhythmia is a heartbeat abnormality that can be harmless or harmful, depending on the kind of arrhythmia the patient suffers from. People with arrhythmia usually feel similar physical symptoms, but each arrhythmia requires a different treatment. For arrhythmia detection, cardiologists use the electrocardiogram, which represents cardiac electrical activity and is a kind of sequential data with high complexity, so a high-performance classification method is needed to support arrhythmia detection. In this paper, the Long Short-Term Memory (LSTM) method was used to classify arrhythmias. Performance was boosted by using AdaDelta as the adaptive learning-rate method, and the result was compared against LSTM without an adaptive learning rate. The best result, with high accuracy, was obtained by LSTM with AdaDelta: a correct classification rate of 98% on the training data and 97% on the test data.
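The adaptive learning-rate mechanism named in the abstract can be shown in isolation: this is a minimal NumPy implementation of the AdaDelta update rule applied to a toy quadratic, not the paper's LSTM training loop. Step sizes come from running averages of squared gradients and squared updates, so no global learning rate is tuned.

```python
import numpy as np

def adadelta(grad_fn, x0, rho=0.95, eps=1e-6, steps=5000):
    """AdaDelta: per-parameter step sizes from decaying averages of
    squared gradients (eg2) and squared updates (ed2)."""
    x = np.asarray(x0, dtype=float)
    eg2 = np.zeros_like(x)   # E[g^2]
    ed2 = np.zeros_like(x)   # E[dx^2]
    for _ in range(steps):
        g = grad_fn(x)
        eg2 = rho * eg2 + (1 - rho) * g ** 2
        dx = -np.sqrt(ed2 + eps) / np.sqrt(eg2 + eps) * g
        ed2 = rho * ed2 + (1 - rho) * dx ** 2
        x += dx
    return x

# Minimise f(x, y) = (x - 3)^2 + 10 * y^2 from a distant start.
grad = lambda v: np.array([2 * (v[0] - 3), 20 * v[1]])
x = adadelta(grad, [-5.0, 4.0])
print(x)
```

In the paper this update rule drives the LSTM's weight updates in place of a fixed learning rate.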
Classification Algorithms of Maternal Risk Detection For Preeclampsia With Hypertension During Pregnancy Using Particle Swarm Optimization Tahir, Muhlis; Badriyah, Tessy; Syarif, Iwan
EMITTER International Journal of Engineering Technology Vol 6 No 2 (2018)
Publisher : Politeknik Elektronika Negeri Surabaya (PENS)

DOI: 10.24003/emitter.v6i2.287

Abstract

Preeclampsia is a pregnancy abnormality that develops after 20 weeks of pregnancy, characterized by hypertension and proteinuria. The purpose of this research was to predict the risk level of preeclampsia in pregnant women during pregnancy using Neural Network and Deep Learning algorithms, and to compare the results of the two. Seventeen parameters were taken from 1,077 patient records collected at Haji General Hospital Surabaya and two hospitals in Makassar from 12 December 2017 until 12 February 2018. We use Particle Swarm Optimization (PSO) as the feature selection algorithm. The experiments show that PSO can reduce the number of attributes from 17 to 7. With leave-one-out validation on the original data, Deep Learning achieves an accuracy of 95.12%; on the reduced dataset it executes eight times faster, and its accuracy increases by 0.56% to 95.68%. In general, PSO significantly lowers the number of attributes while maintaining or improving accuracy and reducing computation time. Deep Learning enables an end-to-end framework that needs only inputs and outputs, without tweaking of attributes or features, and without requiring long, complex pipelines or a deep understanding of the underlying data.
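The PSO feature-selection step can be sketched as binary PSO: particles are feature masks, velocities are real-valued, and a sigmoid transfer turns each velocity into a bit-flip probability. The correlation-based fitness, swarm parameters, and data below are toy assumptions; the study's fitness would be classifier accuracy on the clinical attributes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 17 attributes, only the first 4 informative (mirroring the
# abstract's 17 -> 7 reduction in spirit, not in detail).
X = rng.normal(size=(300, 17))
y = (X[:, :4].sum(axis=1) > 0).astype(int)

def fitness(mask):
    """Correlation-based merit of a feature subset, penalised by size."""
    if mask.sum() == 0:
        return -1.0
    corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in np.flatnonzero(mask)])
    return corr.mean() - 0.01 * mask.sum()

sigmoid = lambda v: 1 / (1 + np.exp(-v))
n_particles, n_feat = 20, X.shape[1]
V = rng.normal(size=(n_particles, n_feat))                  # velocities
P = (rng.random((n_particles, n_feat)) < 0.5).astype(int)   # positions
pbest, pbest_fit = P.copy(), np.array([fitness(p) for p in P])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(40):
    r1, r2 = rng.random(V.shape), rng.random(V.shape)
    # Standard velocity update: inertia + cognitive + social terms.
    V = 0.7 * V + 1.5 * r1 * (pbest - P) + 1.5 * r2 * (gbest - P)
    P = (rng.random(V.shape) < sigmoid(V)).astype(int)      # binary update
    for i, p in enumerate(P):
        f = fitness(p)
        if f > pbest_fit[i]:
            pbest[i], pbest_fit[i] = p.copy(), f
    gbest = pbest[pbest_fit.argmax()].copy()

print("selected features:", np.flatnonzero(gbest))
```

The surviving mask is the reduced attribute set on which the downstream classifier is trained, which is where the abstract's speed-up comes from.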