Contact Name
Triwiyanto
Contact Email
triwiyanto123@gmail.com
Phone
+628155126883
Journal Mail Official
editorial.jeeemi@gmail.com
Editorial Address
Department of Electromedical Engineering, Poltekkes Kemenkes Surabaya Jl. Pucang Jajar Timur No. 10, Surabaya, Indonesia
Location
Kota Surabaya,
Jawa Timur
INDONESIA
Journal of Electronics, Electromedical Engineering, and Medical Informatics
ISSN : -     EISSN : 2656-8632     DOI : https://doi.org/10.35882/jeeemi
The Journal of Electronics, Electromedical Engineering, and Medical Informatics (JEEEMI) is a peer-reviewed open-access journal. The journal invites scientists and engineers throughout the world to exchange and disseminate theoretical and practice-oriented work covering three (3) major areas of research: 1) Electronics, 2) Biomedical Engineering, and 3) Medical Informatics (with an emphasis on hardware and software design). Submitted papers must be written in English for the initial review stage by editors and the further review process by a minimum of two reviewers.
Articles: 25 Documents, Vol 7 No 2 (2025): April
Liver Cirrhosis Classification using Extreme Gradient Boosting Classifier and Harris Hawk Optimization as Hyperparameter Tuning Nalasari, Lista Tri; Anam, Syaiful; Shofianah, Nur
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 2 (2025): April
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i2.730

Abstract

This study proposes an early-diagnosis Machine Learning model for liver cirrhosis classification using the Hepatitis C dataset from the UCI ML repository, hepatitis C being the leading cause of cirrhosis. Classification is performed with the XGBoost algorithm, which previous studies have shown to offer high accuracy and time efficiency. These advantages, however, depend on the chosen combination of hyperparameters, and XGBoost has a large number of them, making manual configuration time-consuming for researchers. This study therefore combines XGBoost with the Harris Hawks Optimization (HHO) algorithm for hyperparameter tuning, implemented with a hawk population of 40 and a maximum of 25 iterations. The proposed XGBoost-HHO model achieves an average performance of 99.34% for accuracy, MAR, and MAP, and 99.33% for Macro F1-score. These results are achieved with the shortest processing time across 25 experiments compared to the other combined models. The XGBoost-HHO model shows a significant increase in performance and a reduction in overfitting compared to the standard XGBoost, SVM, and RF models, as well as several other combined models, including RF-HHO, SVM-HHO, XGBoost-PSO, and XGBoost-BA. Additionally, feature importance analysis of the XGBoost-HHO algorithm shows that Alanine Aminotransferase (ALT), Protein, and Gamma-glutamyltransferase (GGT) contribute the most to the classification process, with gain values of 11.21, 9.51, and 7.98, respectively. Overall, the findings of this study show that the XGBoost-HHO combination provides competitive performance and can serve as an excellent alternative for liver cirrhosis classification in terms of both accuracy and time efficiency.
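The hyperparameter tuning the abstract describes can be illustrated with a minimal, HHO-inspired population search. This is a sketch, not the paper's implementation: the fitness function below is a toy quadratic standing in for XGBoost's cross-validated accuracy, the two tuned parameters and their bounds are assumptions, and the update rules are a simplified exploration/exploitation scheme rather than the full Harris Hawks equations.

```python
import random

# Toy stand-in for the real objective: in the paper, the fitness of a
# hyperparameter vector would be XGBoost's cross-validated score on the
# Hepatitis C dataset. Here we minimise a simple quadratic so the sketch
# stays self-contained (parameter names and bounds are illustrative).
def fitness(params):
    learning_rate, max_depth = params
    return (learning_rate - 0.1) ** 2 + (max_depth - 6) ** 2

BOUNDS = [(0.01, 0.5), (2.0, 12.0)]   # (learning_rate, max_depth)

def clip(value, lo, hi):
    return max(lo, min(hi, value))

def hho_like_search(pop_size=40, max_iter=25, seed=0):
    rng = random.Random(seed)
    # Initialise the hawk population uniformly inside the bounds.
    hawks = [[rng.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    best = min(hawks, key=fitness)          # the "rabbit" (best solution so far)
    for t in range(max_iter):
        # Escape energy decays over iterations, shifting the population
        # from exploration to exploitation, as in HHO.
        energy = 2 * (1 - t / max_iter)
        for i, hawk in enumerate(hawks):
            r = rng.random()
            if abs(energy) >= 1:            # exploration: perch near a random hawk
                other = hawks[rng.randrange(pop_size)]
                cand = [clip(o - r * abs(o - 2 * r * h), lo, hi)
                        for (o, h), (lo, hi) in zip(zip(other, hawk), BOUNDS)]
            else:                           # exploitation: besiege the rabbit
                cand = [clip(b - energy * abs(b - h), lo, hi)
                        for (b, h), (lo, hi) in zip(zip(best, hawk), BOUNDS)]
            if fitness(cand) < fitness(hawk):   # greedy acceptance
                hawks[i] = cand
        best = min(hawks + [best], key=fitness)
    return best

best = hho_like_search()
print(best)   # best hyperparameter vector found
```

With the paper's settings (population 40, 25 iterations), each candidate evaluation would cost one XGBoost training run, which is why a good optimizer matters for time efficiency.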
Development of Human Activity Recognition (HAR) for Health Rehabilitation Using MMWAVE Radar with 3D Point Cloud Data Yudha Setyawan, Raden Rofiq; Fiky Y. Suratman; Khilda Afifah
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 2 (2025): April
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i2.642

Abstract

Postoperative recovery is a crucial phase in ensuring successful rehabilitation. However, many healthcare facilities face challenges due to the limited availability of medical personnel, making routine patient monitoring difficult. This limitation can delay the early detection of complications and reduce overall recovery effectiveness. To address this issue, this study proposes a non-invasive, radar-based system for remote postoperative patient monitoring. The proposed system utilizes the IWR6843AOP radar to generate 3D point cloud data, spatially representing patient movements. This approach enables continuous monitoring without compromising patient privacy, allowing healthcare providers to offer more efficient care. The collected data undergoes preprocessing, including normalization, labeling, and dataset splitting, before being classified using deep learning models such as 3D CNN, 3D CNN+LSTM, 3D CNN+Bi-LSTM, PointNet, PointNet++, and RNN. The dataset consists of six activity categories: empty space, sitting, standing, walking, running, and squatting, recorded at a frame frequency of 18.18 Hz. Experimental results show that the 3D CNN combined with Bi-LSTM achieves the highest accuracy of 90%, surpassing models like PointNet and RNN. These findings indicate that a radar-based and deep learning-driven approach offers an accurate, efficient, and non-intrusive solution for postoperative monitoring, reducing the need for direct medical supervision. This technology has significant potential for broader healthcare applications, contributing to more advanced, accessible, and technology-driven patient monitoring systems. By integrating artificial intelligence and radar sensing, this research paves the way for innovative solutions in modern healthcare, ensuring better postoperative outcomes while optimizing medical resources.
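The preprocessing pipeline mentioned above (normalization, labeling, dataset splitting) can be sketched for a single radar frame. This is an assumed, illustrative layout: the paper does not specify its frame format, so the (x, y, z) tuples, class names, and 80/20 split below are hypothetical.

```python
import random

# Hypothetical sketch of preprocessing for radar 3D point cloud frames:
# per-frame normalisation, labelling, and a train/test split. The frame
# layout and split ratio are illustrative, not taken from the paper.
ACTIVITIES = ["empty", "sitting", "standing", "walking", "running", "squatting"]

def normalize_frame(points):
    """Centre a frame of (x, y, z) points and scale to unit max range."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    centred = [(x - cx, y - cy, z - cz) for x, y, z in points]
    scale = max(max(abs(c) for c in p) for p in centred) or 1.0
    return [(x / scale, y / scale, z / scale) for x, y, z in centred]

def split_dataset(samples, train_frac=0.8, seed=0):
    """Shuffle (frame, label) pairs and split into train/test subsets."""
    rng = random.Random(seed)
    samples = samples[:]
    rng.shuffle(samples)
    cut = int(len(samples) * train_frac)
    return samples[:cut], samples[cut:]

frame = [(1.0, 2.0, 0.5), (2.0, 0.0, 1.5), (0.0, 1.0, 1.0)]
norm = normalize_frame(frame)
print(norm)
```

Normalizing each frame to a common origin and scale is what lets a 3D CNN compare point clouds recorded at different distances from the radar.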
A Novel Deep Learning Framework for Enhanced Glaucoma Detection Using Attention-Gated U-Net, Deep Wavelet Scattering, and Vision Transformers V, Krishnamoorthy; S, Sivanantham; V, Akshaya; S, Nivedha; Depuru, Sivakumar; M, Manikandan
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 2 (2025): April
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i2.706

Abstract

Globally, glaucoma is a major cause of permanent blindness, and preserving eyesight depends on early detection. In this work, we present a novel deep-learning framework for enhanced glaucoma prediction. The input image is first preprocessed with a denoising generative adversarial network; segmentation of the optic cup and optic disc is then carried out by an Attention-Gated U-Net with Dilated Convolutions; features are extracted with a Deep Wavelet Scattering Network (DWSN); and, finally, glaucoma classification is performed by a Vision Transformer (ViT). The attention-gated U-Net with dilated convolutions improves the accuracy of optic disc and cup boundaries by 7% compared to conventional U-Net methods. The DWSN achieves a 5% improvement in feature discrimination over conventional CNNs by capturing multiscale texture and structural information. Lastly, the ViT, which is based on transfer learning, attains a 94.6% accuracy rate, a 93.8% sensitivity rate, and a 95.2% specificity rate. The suggested approach outperformed CNN-based models by about 4% on all criteria, and the system achieved an F1 score of 0.95 and an AUC (Area Under Curve) of 0.96 when tested on publicly accessible glaucoma datasets. The framework thus delivers multi-stage deep-learning processing for glaucoma prediction by integrating a denoising generative adversarial network for image preprocessing, an Attention-Gated U-Net with Dilated Convolutions for precise optic cup and disc segmentation, deep wavelet scattering for feature extraction, and Vision Transformers for glaucoma classification.
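The attention gate at the heart of an Attention-Gated U-Net can be shown in a toy scalar form: a gating signal from a coarser decoder layer decides how much of each skip-connection feature passes through. This is a sketch under simplifying assumptions; the real gate applies learned 1x1 convolutions to whole feature maps, and the scalar weights below are made up.

```python
import math

# Toy scalar version of the additive attention gate used in Attention-Gated
# U-Nets: psi = sigmoid(w_psi * relu(w_x*x + w_g*g + b)), and the skip
# feature x is scaled by psi. Weights here are illustrative scalars.
def attention_gate(x, g, w_x=1.0, w_g=1.0, w_psi=1.0, b=0.0):
    """Return (gated skip feature, attention coefficient)."""
    q = max(0.0, w_x * x + w_g * g + b)            # ReLU of the additive attention
    alpha = 1.0 / (1.0 + math.exp(-(w_psi * q)))   # sigmoid attention coefficient
    return x * alpha, alpha

gated, alpha = attention_gate(x=2.0, g=0.5)
print(gated, alpha)
```

Because alpha is learned per spatial location in the full model, the gate suppresses background regions and sharpens the optic disc/cup boundary, which is the mechanism behind the reported 7% boundary improvement.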
Hybrid Fuzzy Logic and Metaheuristic Optimized Trinetfusion Model for Liver Tumor Segmentation Mohammed Ashik; Patrick, Arun; D. Dennis Ebenezer; Rini Chowdhury; Prashant Kumar; Ida, S. Jhansi
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 2 (2025): April
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i2.657

Abstract

Liver tumor segmentation plays a vital role in medical imaging, enabling accurate diagnosis and precise treatment planning for liver cancer. Traditional methods such as threshold-based techniques and region-growing algorithms have been explored, and more recently, deep learning models have shown promise in automating and improving segmentation tasks. However, these approaches often face significant limitations, including challenges in accurately delineating tumor boundaries, high sensitivity to noise, and the risk of overfitting, especially when dealing with complex tumor structures and limited annotated data. To overcome these limitations, a novel Hybrid Fuzzy Logic and Metaheuristic Optimized TriNetFusion Model is proposed. This model integrates the strengths of fuzzy logic, metaheuristic optimization, and deep learning to deliver a more reliable and adaptable segmentation framework. Fuzzy logic is utilized to handle the inherent uncertainty and ambiguity in medical images, particularly in tumor boundary regions where intensity variations are subtle and complex. Metaheuristic optimization algorithms are employed to fine-tune the parameters of the segmentation model effectively, ensuring a more generalized and adaptive performance across different datasets. At the core of the model lies TriNetFusion, a multi-branch deep learning architecture that fuses complementary features extracted at various levels. The fusion of these multi-level features contributes to robust segmentation by capturing both global and local image characteristics. This model is specifically designed to adapt to irregular and complex tumor shapes, significantly reducing false positives and improving boundary precision. Experimental validation using benchmark liver tumor datasets demonstrates that the proposed model achieves a segmentation accuracy of 96% with a low loss value of 0.2, indicating strong generalization without overfitting. 
The hybrid approach not only enhances segmentation precision but also ensures robustness and adaptability, making it a highly promising solution for liver tumor segmentation in clinical practice.
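The role fuzzy logic plays in the boundary regions described above can be illustrated with a one-line membership function: instead of a hard intensity threshold, each pixel gets a graded "tumour" membership in [0, 1]. The piecewise-linear shape and the breakpoints below are assumptions for illustration, not the paper's actual membership function.

```python
# Illustrative sketch of fuzzy handling of boundary uncertainty: a pixel
# intensity maps to a graded tumour membership rather than a binary label.
# The breakpoints (low, high) are made-up values, not from the paper.
def tumour_membership(intensity, low=0.3, high=0.7):
    """Piecewise-linear membership in [0, 1] over normalised intensity."""
    if intensity <= low:
        return 0.0        # clearly background
    if intensity >= high:
        return 1.0        # clearly tumour
    return (intensity - low) / (high - low)   # ambiguous boundary band

for v in (0.2, 0.5, 0.9):
    print(v, tumour_membership(v))  # -> 0.0, 0.5, 1.0
```

In a hybrid model, parameters such as `low` and `high` are exactly the kind of quantities a metaheuristic optimizer would tune against segmentation accuracy.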
Enhancing Skin Cancer Classification with Mixup Data Augmentation and EfficientNet D, Shamia; Umapriya, R.; Prasad, M. L. M.; Rini Chowdhury; Prashant Kumar; K. Vishnupriya
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 2 (2025): April
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i2.703

Abstract

Skin lesion classification and segmentation are two crucial tasks in dermatological diagnosis, where automated approaches can significantly aid in early detection and improve treatment planning. The proposed work presents a comprehensive framework that integrates K-means clustering for segmentation, Mixup augmentation for data enhancement, and the EfficientNet B7 model for classification. Initially, K-means clustering is applied as a pre-processing step to accurately segment the lesion regions from the background, ensuring that the model focuses on the most relevant and informative features. This segmentation enhances the model's ability to differentiate between subtle lesion boundaries and surrounding skin textures. To address the common issue of class imbalance and to improve the overall robustness of the classification model, Mixup augmentation is employed. This technique generates synthetic samples by linearly interpolating between pairs of images and their corresponding labels, effectively enriching the training dataset and promoting better generalization. For the classification task, EfficientNet B7 is utilized due to its superior feature extraction capabilities, optimized scalability, and excellent performance across various image recognition challenges. The entire pipeline was evaluated on a dataset comprising 10,015 dermatoscopic images covering seven distinct categories of skin lesions. The proposed method achieved outstanding performance, demonstrating a precision rate of 95.3% and maintaining a low loss of 0.2 during evaluation. Compared to traditional machine learning and earlier deep learning approaches, the proposed framework showed significant improvements, particularly in handling complex patterns and imbalanced datasets, making it a promising solution for real-world clinical deployment in dermatology.
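The Mixup interpolation the abstract describes is compact enough to sketch directly: a synthetic sample is a convex combination of two images and their one-hot labels, x = lam*x1 + (1-lam)*x2 and y = lam*y1 + (1-lam)*y2. The Beta(alpha, alpha) draw for lam follows the standard Mixup recipe; the tiny two-element "images" and alpha value below are toy assumptions.

```python
import random

# Minimal sketch of one Mixup step: images and one-hot labels are linearly
# interpolated with the same coefficient lam ~ Beta(alpha, alpha).
def mixup(x1, y1, x2, y2, alpha=0.2, rng=random):
    lam = rng.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam

# Toy example: two "images" of two pixels each, with one-hot class labels.
x, y, lam = mixup([1.0, 0.0], [1, 0], [0.0, 1.0], [0, 1])
print(lam, x, y)  # mixed label stays a valid distribution: sum(y) == 1
```

Because the mixed label remains a probability distribution, the model is trained on soft targets, which is what gives Mixup its regularizing effect on imbalanced datasets.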
