Contact Name
Triwiyanto
Contact Email
triwiyanto123@gmail.com
Phone
+628155126883
Journal Mail Official
editorial.jeeemi@gmail.com
Editorial Address
Department of Electromedical Engineering, Poltekkes Kemenkes Surabaya, Jl. Pucang Jajar Timur No. 10, Surabaya, Indonesia
Location
Kota Surabaya,
Jawa Timur
INDONESIA
Journal of Electronics, Electromedical Engineering, and Medical Informatics
ISSN: -     EISSN: 2656-8632     DOI: https://doi.org/10.35882/jeeemi
The Journal of Electronics, Electromedical Engineering, and Medical Informatics (JEEEMI) is a peer-reviewed open-access journal. The journal invites scientists and engineers throughout the world to exchange and disseminate theoretical and practice-oriented work covering three (3) major areas of research: 1) Electronics, 2) Biomedical Engineering, and 3) Medical Informatics (with emphasis on hardware and software design). Submitted papers must be written in English for an initial review stage by editors and a further review process by a minimum of two reviewers.
Articles 287 Documents
Impact of a Synthetic Data Vault for Imbalanced Class in Cross-Project Defect Prediction Putri Nabella; Rudy Herteno; Setyo Wahyu Saputro; Mohammad Reza Faisal; Friska Abadi
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 6 No 2 (2024): April
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v6i2.409

Abstract

Software Defect Prediction (SDP) is crucial for ensuring software quality. However, class imbalance (CI) poses a significant challenge in predictive modeling. This study examines the effectiveness of the Synthetic Data Vault (SDV) in mitigating CI within Cross-Project Defect Prediction (CPDP). Methodologically, the study addresses CI across the ReLink, MDP, and PROMISE datasets by leveraging SDV to augment minority classes. Classification uses Decision Tree (DT), Logistic Regression (LR), K-Nearest Neighbors (KNN), Naive Bayes (NB), and Random Forest (RF), and model performance is evaluated using AUC and the t-test. The results consistently show that SDV outperforms SMOTE and other techniques across projects, with statistically significant improvements. KNN dominates the average AUC results, with values of 0.695, 0.704, and 0.750. On ReLink, KNN shows a 16.06% improvement over the imbalanced data and 12.84% over SMOTE. Similarly, on MDP, KNN achieves a 20.71% improvement over the imbalanced data and 10.16% over SMOTE. On PROMISE, KNN achieves a 13.55% improvement over the imbalanced data and 7.01% over SMOTE. RF displays moderate performance, closely followed by LR and DT, while NB lags behind. The statistical significance of these findings is confirmed by t-tests, with all p-values below the 0.05 threshold. These findings underscore SDV's potential for enhancing CPDP outcomes and tackling CI challenges in SDP, with KNN as the best-performing classifier. Adoption of SDV could prove a promising tool for enhancing defect detection and CI mitigation.
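The SDV itself learns a generative model of the tabular data, but the interpolation-based baseline it is compared against (SMOTE) can be sketched in a few lines of plain Python. The `minority` points below are hypothetical, not drawn from the studied datasets:

```python
import random

def smote_like_oversample(minority, n_new, k=2, seed=42):
    """Generate synthetic minority samples by interpolating between
    a sample and one of its k nearest neighbours (SMOTE-style sketch)."""
    rng = random.Random(seed)

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        # Find the k nearest other minority points to the chosen base
        neighbours = sorted((p for p in minority if p is not base),
                            key=lambda p: dist(base, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # random position along the segment base -> nb
        synthetic.append([b + gap * (n - b) for b, n in zip(base, nb)])
    return synthetic

minority = [[1.0, 2.0], [1.2, 1.8], [0.9, 2.2]]  # hypothetical minority class
new_points = smote_like_oversample(minority, n_new=3)
print(new_points)
```

Because each synthetic point lies on a segment between two real minority points, it always falls within the coordinate-wise bounds of the original minority class.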
Application Of SMOTE To Address Class Imbalance In Diabetes Disease Classification Utilizing C5.0, Random Forest, And SVM M. Khairul Rezki; Mazdadi, Muhammad Itqan; Indriani, Fatma; Muliadi, Muliadi; Saragih, Triando Hamonangan; Athavale, Vijay Annant
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 6 No 4 (2024): October
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v6i4.434

Abstract

The implementation of SMOTE to tackle class imbalance in classification frequently yields suboptimal outcomes, owing to the intricacy of the dataset and the multitude of attributes at play. Consequently, alternative classification models were explored through experimentation to gauge their precision. This research compares the precision of the C5.0, Random Forest, and SVM classification models both with and without SMOTE. The methodology encompasses dataset selection, an overview of the classification algorithms (C5.0, Random Forest, SVM), the SMOTE technique, validation via split validation, preprocessing involving min-max normalization, and performance evaluation using confusion matrices and AUC analysis. The dataset, sourced from Kaggle to address class imbalance in a diabetes dataset using SMOTE, consists of 768 instances: 268 diabetic cases and 500 non-diabetic cases. Prior to SMOTE application, the classification precision values for C5.0, Random Forest, and SVM were 0.714, 0.733, and 0.746 respectively, with corresponding AUC values of 0.745, 0.824, and 0.799. Post-SMOTE, the precision values for the same techniques were 0.603, 0.727, and 0.727, with AUC values of 0.734, 0.831, and 0.794 respectively. It can be inferred that SMOTE has minimal impact across the three classification models, likely due to overfitting on the dataset: excessive reliance on synthesized data for the minority class diminishes model performance, precision, and AUC scores.
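The min-max normalization step in the preprocessing pipeline above maps each feature into [0, 1]. A minimal sketch in plain Python, with hypothetical glucose values standing in for a real diabetes-dataset column:

```python
def min_max_normalize(column):
    """Scale a list of numeric values into [0, 1] (min-max normalization)."""
    lo, hi = min(column), max(column)
    if hi == lo:                        # constant column: map everything to 0.0
        return [0.0 for _ in column]
    return [(v - lo) / (hi - lo) for v in column]

glucose = [85, 168, 183, 137, 116]      # hypothetical raw feature values
print(min_max_normalize(glucose))       # minimum maps to 0.0, maximum to 1.0
```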
Comparison of the Adaboost Method and the Extreme Learning Machine Method in Predicting Heart Failure Muhammad Nadim Mubaarok; Triando Hamonangan Saragih; Muliadi; Fatma Indriani; Andi Farmadi; Rizal, Achmad
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 6 No 3 (2024): July
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v6i3.440

Abstract

Heart disease, classified as a non-communicable disease, is a leading cause of death every year. Expert involvement is considered essential in diagnosing heart disease, given its complex nature and potential severity. Machine learning algorithms have emerged as powerful tools capable of effectively predicting and detecting heart disease, thereby reducing the challenges associated with its diagnosis. Notable examples include the Extreme Learning Machine and Adaptive Boosting, both machine learning techniques adapted for classification purposes. This research introduces a new approach that relies on the use of a single parameter. Through careful optimization of algorithm parameters, there is a marked improvement in the accuracy of machine learning predictions, underscoring the importance of parameter tuning in this domain. The Heart Failure dataset serves as the focal point, with the aim of demonstrating the optimal accuracy achievable with machine learning algorithms. The results show an average accuracy of 0.83 for the Extreme Learning Machine and 0.87 for Adaptive Boosting; with standard deviations, the results are 0.83±0.02 for the Extreme Learning Machine and 0.87±0.03 for Adaptive Boosting, highlighting the efficacy of these algorithms for heart disease prediction. In particular, introducing the learning-rate parameter into AdaBoost provides better results than the previous algorithm. These findings underline the effectiveness of the Extreme Learning Machine and Adaptive Boosting, especially when combined with the introduction of a single parameter: the added parameter yields improved accuracy compared to previous research using standard methods alone.
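The "0.83±0.02" style of result above is simply the mean and sample standard deviation over repeated runs. A small sketch using Python's standard library, with hypothetical per-fold accuracies (not the study's actual fold scores):

```python
import statistics

def summarize_accuracy(scores):
    """Report repeated-run accuracy as (mean, sample standard deviation),
    both rounded to two decimals."""
    mean = statistics.mean(scores)
    std = statistics.stdev(scores)
    return round(mean, 2), round(std, 2)

# Hypothetical per-fold accuracies for two classifiers
elm_scores = [0.81, 0.83, 0.85, 0.83]
ada_scores = [0.84, 0.87, 0.90, 0.87]
print(summarize_accuracy(elm_scores))   # → (0.83, 0.02)
print(summarize_accuracy(ada_scores))
```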
Bi-directional Long Short-Term Memory with Bird Mating Optimizer based Spectrum Sensing Technique for Cognitive Radio Networks M, Saraswathi; E, Logashanmugam
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 6 No 4 (2024): October
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v6i4.441

Abstract

Cognitive radio networks (CRN) enable wireless devices to sense the radio spectrum, determine the state of frequency channels, and reconfigure communication variables to satisfy QoS needs while reducing energy utilization. In cognitive radio (CR), the detection of primary user signals is crucial for secondary users to make the best use of the available spectrum. The problem with conventional spectrum sensing approaches is their high rate of missed detections and false alarms, which makes effective use of the spectrum difficult. To improve the accuracy of free-spectrum detection, deep learning-based spectrum sensing is employed. To resolve the drawbacks of traditional energy detection models, this paper presents a new spectrum sensing technique for cognitive radio networks (SST-CRN). Recently published research in spectrum sensing has consequently placed a high value on model-agnostic deep learning. In particular, long short-term memory (LSTM) networks perform exceptionally well at extracting spatial and temporal information from input data. The proposed model, Bidirectional Long Short-Term Memory (Bi-LSTM) with Bird Mating Optimization (BMO), makes it possible to create nonlinear threshold-based systems more quickly and easily than previously possible. The proposed Bi-LSTM with BMO technique involves two stages of operation: offline and online. The offline stage creates the nonlinear threshold value for energy detection. The online stage then automatically selects a decision function saved in the offline stage to determine the presence of a primary user. Experiments were carried out and the results analyzed using the RadioML2016.10b dataset. The combination of the Bi-LSTM and BMO models was found to achieve greater spectrum detection performance than previous sensing models.
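The traditional energy detector that the proposed Bi-LSTM + BMO model improves upon compares average received power against a threshold. A minimal sketch, where the sample values and the threshold are hypothetical:

```python
def energy_detect(samples, threshold):
    """Classical energy detector: average signal power over N samples,
    compared against a threshold, decides primary-user presence."""
    test_statistic = sum(abs(s) ** 2 for s in samples) / len(samples)
    return test_statistic > threshold, test_statistic

# Hypothetical received samples: noise-only vs. signal-plus-noise
noise_only = [0.1, -0.2, 0.15, -0.05]
with_signal = [1.1, -0.9, 1.2, -1.0]
print(energy_detect(noise_only, threshold=0.5))   # (False, ...): channel free
print(energy_detect(with_signal, threshold=0.5))  # (True, ...): user present
```

The proposal in the paper replaces the fixed threshold with a nonlinear, learned decision rule; this sketch only shows the baseline being replaced.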
Simple Data Augmentation and U-Net CNN for Nuclei Binary Segmentation on Pap Smear Images Desiani, Anita; Irmeilyana; Zayanti, Des Alwine; Utama, Yadi; Arhami, Muhammad; Affandi, Azhar Kholiq; Sasongko, Muhammad Aditya; Ramayanti, Indri
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 6 No 3 (2024): July
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v6i3.442

Abstract

Nuclei and cytoplasm can be detected in Pap smear images; each image consists of cytoplasm and nuclei. In a Pap smear image, nuclei are the most critical cell components and undergo significant changes in cervical cancer disorders. To help women avoid cervical cancer, early detection of nuclei abnormalities can be done in various ways, one of which is separating the nuclei from the non-nuclei parts through image segmentation. In this study, segmentation separating the nuclei from the other parts of the Pap smear image is carried out by applying the U-Net CNN architecture. The amount of Pap smear image data is limited, and this limited data can cause overfitting in the U-Net CNN model; meanwhile, U-Net CNN needs a large amount of training data to achieve good classification performance. One technique to increase the data is augmentation; simple augmentation techniques are flipping and rotation. The result of applying the U-Net CNN architecture with augmentation is a binary image consisting of two parts: the background and the nuclei. Performance of the combined U-Net CNN and augmentation technique is evaluated using accuracy, sensitivity, specificity, and F1-score. The accuracy, sensitivity, and F1-score values are greater than 90%, while the specificity is still below 80%. These results show that U-Net CNN combined with augmentation is excellent at detecting nuclei, compared to non-nuclei cells, in Pap smear images.
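The flip and rotation augmentations mentioned above can be expressed directly on an image stored as a list of rows; a toy 2×2 image illustrates the transforms:

```python
def hflip(img):
    """Horizontal flip of a 2-D image given as a list of rows."""
    return [row[::-1] for row in img]

def rotate90(img):
    """Rotate a 2-D image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

img = [[1, 2],
       [3, 4]]
print(hflip(img))     # → [[2, 1], [4, 3]]
print(rotate90(img))  # → [[3, 1], [4, 2]]
```

Applying each transform to every training image (and its mask, identically) multiplies the dataset size without collecting new Pap smear slides.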
A Comparative Study: Application of Principal Component Analysis and Recursive Feature Elimination in Machine Learning for Stroke Prediction Hermiati, Arya Syifa; Herteno, Rudy; Indriani, Fatma; Saragih, Triando Hamonangan; Muliadi; Triwiyanto, Triwiyanto
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 6 No 3 (2024): July
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v6i3.446

Abstract

Stroke is a disease that occurs in the brain and can cause both focal and global brain dysfunction. Stroke research mainly aims to predict risk and mortality. Machine learning can be used to diagnose and predict diseases in the healthcare field, especially stroke prediction. However, medical record data collected to predict a disease usually contains much noise, because not all variables are important and relevant to the prediction process. In this case, dimensionality reduction is essential to remove noisy (i.e., irrelevant) and redundant features. This study aims to predict stroke using Recursive Feature Elimination (RFE) for feature selection, Principal Component Analysis (PCA) for feature extraction, and a combination of the two. The dataset used in this research is a stroke prediction dataset from Kaggle. The research methodology consists of pre-processing, SMOTE, 10-fold cross-validation, feature selection, feature extraction, and machine learning, including SVM, Random Forest, Naive Bayes, and Linear Discriminant Analysis. From the results obtained, SVM and Random Forest reach their highest accuracy values of 0.8775 and 0.9511 without PCA and RFE; Naive Bayes reaches its highest value of 0.7685 when PCA with 20 selected features is followed by RFE selecting 5 features; and LDA reaches its highest accuracy of 0.7963 with 20 features from feature selection followed by feature extraction. It can be concluded that SVM and Random Forest achieve their highest accuracy without PCA and RFE, while Naive Bayes and LDA perform better with a combination of PCA and RFE. The implication of this research is to understand the effect of RFE and PCA on machine learning to improve stroke prediction.
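RFE's core loop is greedy backward elimination: repeatedly drop the feature whose removal hurts the model score least. The sketch below uses a hypothetical additive importance score in place of a trained model, so the feature names and weights are illustrative only:

```python
def recursive_feature_elimination(features, score_fn, n_keep):
    """Greedy backward elimination: repeatedly drop the feature whose
    removal leaves the highest-scoring subset, until n_keep remain."""
    selected = list(features)
    while len(selected) > n_keep:
        drop = max(selected,
                   key=lambda f: score_fn([g for g in selected if g != f]))
        selected.remove(drop)
    return selected

# Hypothetical per-feature importances standing in for model-derived weights
importance = {"age": 0.30, "glucose": 0.25, "bmi": 0.20, "smoking": 0.05}
score = lambda subset: sum(importance[f] for f in subset)
print(recursive_feature_elimination(list(importance), score, n_keep=2))
# → ['age', 'glucose']
```

With an additive score, each round removes the least important remaining feature; with a real estimator, `score_fn` would be a cross-validated accuracy.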
Expert System for Pregnancy Risk Diagnosis Using Decision Tree and Dempster-Shafer Method Wiharto; Azizah, Setia Mukti; Hendrasuryawan, Brilyan
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 6 No 4 (2024): October
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v6i3.448

Abstract

The high Maternal Mortality Rate (MMR) remains a severe concern in maternal healthcare. One of the reasons is the delay in recognizing early danger signs during pregnancy. To address this issue, this study develops an expert system aimed at swiftly and efficiently diagnosing pregnancy risks in pregnant women using the Decision Tree and Dempster-Shafer methods. The Decision Tree method is employed for symptom classification, while Dempster-Shafer provides confidence values for existing facts. This research collects data from the dataset, the Poedji Rochjati Score Card (KSPR), and qualitative data through expert interviews. From the collected data, knowledge acquisition is then carried out using the ID3 Decision Tree, combining all symptoms from the gathered data. The processed data is represented as a decision tree and assigned confidence values. The expert system is built on the Laravel framework with PHP and a MySQL database. System validation involves patients as participants and midwives as experts and testers. Testing was conducted on March 13 and 16, 2024, involving 16 patients at the Gatak Community Health Center. The system evaluation shows an accuracy rate of 93.75%, indicating that the system operates effectively. Thus, it can be recommended for use in diagnosing pregnancy risks.
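Dempster's rule of combination, which produces the system's confidence values, multiplies the masses of two evidence sources and renormalizes away conflicting mass. A minimal sketch with hypothetical masses over a "risk"/"safe" frame of discernment:

```python
def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions over focal sets
    (frozensets of hypotheses), renormalizing away conflicting mass."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb   # disjoint focal sets: conflicting mass
    k = 1.0 - conflict
    return {s: w / k for s, w in combined.items()}

RISK, SAFE = frozenset({"risk"}), frozenset({"safe"})
THETA = RISK | SAFE                       # full frame of discernment
m_symptom = {RISK: 0.8, THETA: 0.2}       # hypothetical evidence: a symptom
m_history = {RISK: 0.6, THETA: 0.4}       # hypothetical evidence: history
print(dempster_combine(m_symptom, m_history))  # mass on RISK rises to 0.92
```

Two independent pieces of evidence that each partially support "risk" combine into a stronger belief than either alone, which is the behaviour the expert system relies on.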
A Comparative Study of Improved Ensemble Learning Algorithms for Patient Severity Condition Classification Edi Ismanto; Abdul Fadlil; Anton Yudhana; Kitagawa, Kodai
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 6 No 3 (2024): July
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v6i3.452

Abstract

The evolution of Electronic Health Records (EHR) has facilitated comprehensive patient record-keeping, enhancing healthcare delivery and decision-making processes. Despite these advancements, analyzing EHR data using ensemble machine learning methods poses unique challenges. These challenges include data dimensionality, imbalanced class distributions, and the need for effective hyperparameter tuning to optimize model performance. The study conducted a thorough comparative analysis of various ensemble machine learning (EML) models using Electronic Health Record (EHR) datasets. After addressing data imbalance and reducing dimensionality, the accuracy of the EML models showed significant improvement. Notably, the Gradient Boosting Machine (GBM) and CatBoost models exhibited superior performance with an accuracy of 73%, achieved through experiments involving dimensionality reduction and handling of imbalanced data. Furthermore, optimization techniques such as Grid Search and Random Search were employed to enhance the EML models. The results of model optimization revealed that the GBM + Random Search model performed the best, achieving an accuracy of 74%, followed by the XGBoost + Grid Search model with an accuracy of 73%. The GBM model also excelled in distinguishing between positive and negative classes, boasting the highest Area under Curve (AUC) value of 0.78, indicative of its superior classification capabilities compared to other models. This study emphasizes the significance of incorporating cutting-edge EML techniques into clinical workflows and highlights the revolutionary potential of GBM in classification modeling for patient severity conditions. Future research should focus on deep learning (DL) applications and the integration of these models.
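Grid Search, one of the two optimization techniques above, exhaustively scores every hyperparameter combination. In the sketch below the scoring function is a hypothetical stand-in for cross-validated accuracy, and the grid values are illustrative:

```python
import itertools

def grid_search(param_grid, evaluate):
    """Exhaustively score every hyperparameter combination and
    return the best-scoring configuration with its score."""
    names = list(param_grid)
    best_cfg, best_score = None, float("-inf")
    for values in itertools.product(*(param_grid[n] for n in names)):
        cfg = dict(zip(names, values))
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Hypothetical scoring function standing in for cross-validated accuracy
def mock_accuracy(cfg):
    return (0.70 + 0.01 * cfg["n_estimators"] / 100
            - 0.02 * abs(cfg["learning_rate"] - 0.1))

grid = {"n_estimators": [100, 200], "learning_rate": [0.05, 0.1, 0.2]}
print(grid_search(grid, mock_accuracy))
```

Random Search differs only in sampling a fixed number of random configurations from the same grid instead of enumerating all of them.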
Analysis of Important Features in Software Defect Prediction Using Synthetic Minority Oversampling Techniques (SMOTE), Recursive Feature Elimination (RFE) and Random Forest Ghinaya, Helma; Herteno, Rudy; Faisal, Mohammad Reza; Farmadi, Andi; Indriani, Fatma
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 6 No 3 (2024): July
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v6i3.453

Abstract

Software Defect Prediction (SDP) is essential for improving software quality during testing. As software systems grow more complex, accurately predicting defects becomes increasingly challenging. One challenge is dealing with imbalanced class distributions, where the number of defective instances is significantly lower than that of non-defective ones. To tackle the imbalanced class issue, this study uses the SMOTE technique. Random Forest is chosen as the classification algorithm due to its ability to handle non-linear data, its resistance to overfitting, and its ability to provide information about feature importance in classification. This research aims to evaluate important features and measure accuracy in SDP using the SMOTE+RFE+Random Forest technique. The dataset used in this study is NASA MDP D'', which includes 12 datasets. The study is conducted in two stages: the first stage uses the RFE+Random Forest technique; the second adds the SMOTE technique before RFE and Random Forest to measure accuracy on the NASA MDP data. The results show that the SMOTE technique enhances accuracy across most datasets, with the best performance achieved on the MC1 dataset at an accuracy of 0.9998. Feature importance analysis identifies "maintenance severity" and "cyclomatic density" as the most crucial features in data modeling for SDP. Therefore, the SMOTE+RFE+RF technique effectively improves prediction accuracy across various datasets and successfully addresses class imbalance issues.
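Random Forest's feature-importance scores aggregate the Gini impurity decrease each feature achieves at its splits. The core computation can be sketched in plain Python, with hypothetical defect labels at a single tree node:

```python
def gini(labels):
    """Gini impurity of a label list, the split criterion Random Forest
    uses when ranking feature importance."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def impurity_decrease(parent, left, right):
    """Weighted Gini decrease of a split: larger values mean the
    splitting feature contributes more importance."""
    n = len(parent)
    return (gini(parent)
            - (len(left) / n) * gini(left)
            - (len(right) / n) * gini(right))

parent = [0, 0, 0, 1, 1, 1]             # hypothetical defect labels at a node
print(impurity_decrease(parent, [0, 0, 0], [1, 1, 1]))  # perfect split → 0.5
```

Summing these decreases over every split that uses a given feature, across all trees, yields the importance ranking that singles out features like "maintenance severity".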
A Comparative Analysis of Polynomial-fit-SMOTE Variations with Tree-Based Classifiers on Software Defect Prediction Nur Hidayatullah, Wildan; Herteno, Rudy; Reza Faisal, Mohammad; Adi Nugroho, Radityo; Wahyu Saputro, Setyo; Akhtar, Zarif Bin
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 6 No 3 (2024): July
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v6i3.455

Abstract

Software defects present a significant challenge to the reliability of software systems, often resulting in substantial economic losses. This study examines the efficacy of polynomial-fit SMOTE (pf-SMOTE) variants in combination with tree-based classifiers for software defect prediction, utilizing the NASA Metrics Data Program (MDP) dataset. The research methodology involves partitioning the dataset into training and test subsets, applying pf-SMOTE oversampling, and evaluating classification performance using Decision Trees, Random Forests, and Extra Trees. Findings indicate that the combination of pf-SMOTE-star oversampling with Extra Trees classification achieves the highest average accuracy (90.91%) and AUC (95.67%) across 12 NASA MDP datasets. This demonstrates the potential of pf-SMOTE variants to enhance classification effectiveness. However, caution is warranted regarding potential biases introduced by synthetic data. These findings represent a significant advancement over previous research endeavors, underscoring the critical role of meticulous algorithm selection and dataset characteristics in optimizing classification outcomes. Noteworthy implications include advancements in software reliability and decision support for software project management. Future research may delve into synergies between pf-SMOTE variants and alternative classification methods, as well as explore the integration of hyperparameter tuning to further refine classification performance.
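The AUC figures reported above can be computed directly from AUC's probabilistic definition. A plain-Python sketch with hypothetical defect-probability scores:

```python
def auc(scores, labels):
    """AUC as the probability that a randomly chosen positive is
    scored above a randomly chosen negative (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.4, 0.3, 0.2]      # hypothetical defect probabilities
labels = [1,   1,   0,   1,   0]        # 1 = defective, 0 = clean
print(auc(scores, labels))              # 5 of 6 positive/negative pairs ranked correctly
```

Unlike accuracy, this pairwise-ranking view is insensitive to the positive/negative ratio, which is why AUC is the preferred metric under the class imbalance that pf-SMOTE addresses.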