Contact Name
Triwiyanto
Contact Email
triwiyanto123@gmail.com
Phone
+628155126883
Journal Mail Official
editorial.jeeemi@gmail.com
Editorial Address
Department of Electromedical Engineering, Poltekkes Kemenkes Surabaya Jl. Pucang Jajar Timur No. 10, Surabaya, Indonesia
Location
Kota Surabaya,
Jawa Timur,
INDONESIA
Journal of Electronics, Electromedical Engineering, and Medical Informatics
ISSN: -     EISSN: 2656-8632     DOI: https://doi.org/10.35882/jeeemi
The Journal of Electronics, Electromedical Engineering, and Medical Informatics (JEEEMI) is a peer-reviewed open-access journal. The journal invites scientists and engineers throughout the world to exchange and disseminate theoretical and practice-oriented work covering three (3) major areas of research: 1) Electronics, 2) Biomedical Engineering, and 3) Medical Informatics (with emphasis on hardware and software design). Submitted papers must be written in English for an initial review stage by editors and a further review process by a minimum of two reviewers.
Articles: 25 Documents
Search results for issue "Vol 7 No 3 (2025): July": 25 Documents
Addressing Intrinsic Data Characteristics Issues of Imbalance Medical Data Using Nature Inspired Percolation Clustering Siddavatam, Kaikashan; Shinde, Subhash
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 3 (2025): July
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i3.835

Abstract

Data on diseases are generally skewed towards either positive or negative cases, depending on prevalence. This imbalance can significantly degrade the performance of classification models, producing biased predictions and reduced accuracy for the underrepresented class. Classifier performance is also affected by intrinsic data characteristics, such as noise, outliers, and within-class imbalance, which complicate the learning task. Contemporary imbalance-handling techniques combine clustering with SMOTE (Synthetic Minority Oversampling Technique) to generate realistic synthetic data that preserves the underlying data distribution, generalizes to unseen data, and mitigates overfitting to noisy points. Centroid-based clustering methods (e.g., K-means) often produce synthetic samples that are too tightly clustered or poorly spaced, while density-based methods (e.g., DBSCAN) may fail to generate sufficient meaningful synthetic samples in sparse regions. This work develops a nature-inspired clustering method that, combined with SMOTE, generates synthetic samples that adhere to the underlying data distribution and maintain sparsity among data points, enhancing classifier performance. We propose PC-SMOTE, which leverages Percolation Clustering (PC), a novel clustering algorithm inspired by percolation theory. PC uses a connectivity-driven framework to effectively handle irregular cluster shapes, varying densities, and sparse minority instances. The experiments follow a hybrid design: first, PC-SMOTE was assessed on synthetically generated data with variable spread and other parameters; second, the algorithm was evaluated on eight real medical datasets. The results show that PC-SMOTE performs excellently on the breast cancer, Parkinson's, and cervical cancer datasets, where the AUC ranges from 96% to 99%, higher than that of the other two methods.
This demonstrates the effectiveness of the PC-SMOTE algorithm in handling datasets with both low and high imbalance ratios; it often achieves competitive or superior performance compared to K-means and DBSCAN combined with SMOTE in terms of AUC, F1-score, G-mean, and PR-AUC.
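The cluster-then-oversample idea described in this abstract can be sketched generically: cluster the minority class first, then run SMOTE-style interpolation only within each cluster so synthetic points respect the local distribution. The sketch below is an illustrative reconstruction under that assumption, not the paper's PC-SMOTE implementation, and all function names are invented for illustration.

```python
import random

def interpolate(x, neighbor, rng):
    """SMOTE-style synthesis: a random point on the segment between x and a neighbor."""
    gap = rng.random()
    return [xi + gap * (ni - xi) for xi, ni in zip(x, neighbor)]

def cluster_smote(clusters, n_new, rng=None):
    """Generate n_new synthetic minority samples per cluster.

    `clusters` is a list of clusters, each a list of minority-class points
    (assumed here to come from some clustering step, e.g. the paper's
    Percolation Clustering). Interpolation stays inside one cluster, so
    synthetic points cannot bridge unrelated regions of the feature space.
    """
    rng = rng or random.Random(0)
    synthetic = []
    for cluster in clusters:
        if len(cluster) < 2:  # a singleton cluster offers no pair to interpolate
            continue
        for _ in range(n_new):
            x, neighbor = rng.sample(cluster, 2)
            synthetic.append(interpolate(x, neighbor, rng))
    return synthetic
```

Because each synthetic point lies on a segment between two members of one cluster, it stays inside that cluster's bounding box, which is exactly the property plain SMOTE over the whole minority class lacks.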
Exploring Dataset Variability in Diabetic Retinopathy Classification Using Transfer Learning Approaches Patni, Kinjal; Shruti Yagnik; Pratik Patel
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 3 (2025): July
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i3.838

Abstract

Diabetic retinopathy (DR) is a leading global cause of vision impairment, requiring efficient and rapid diagnosis to protect ocular structures from progressive deterioration. Variations in imaging data across sources pose major obstacles to achieving consistent model performance. This study systematically examines performance fluctuations in DR classification across two benchmark datasets, EYE-PACS and APTOS, using transfer learning with several high-performing CNN architectures: VGG16, VGG19, ResNet50, Xception, InceptionV3, MobileNetV2, and InceptionResNetV2. It evaluates how data heterogeneity and augmentation strategies affect accuracy and robustness in deep learning models, providing new insights through an extensive investigation of generalization under dataset shift using data augmentation tailored to retinal images. Transformations such as rotation, flipping, zooming, and brightness adjustment simulate realistic scenarios and help handle imbalanced classes. Pre-trained CNNs were fine-tuned on both datasets, and the models were evaluated on both original and augmented test images. InceptionResNetV2 outperformed its counterparts with 96.2% accuracy and Xception delivered 95.7% on APTOS; the two models scored 95.9% and 95.4%, respectively, on EYE-PACS. Augmentation improved performance by 3% to 5% across all models. The experimental outcomes demonstrate that sufficiently varied training allows these models to generalize across heterogeneous datasets.
This analysis confirms that combining reliable deep learning architectures with purposeful data augmentation substantially improves the reliability of DR diagnosis, supporting scalable future diagnostic solutions for ophthalmology practice.
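The augmentation recipe named in the abstract (rotation, flipping, zooming, brightness) can be illustrated with a minimal flip-and-brightness pipeline on a grayscale image held as a list of rows. This is a generic sketch, not the study's pipeline; the function names are assumptions, and rotation/zoom are omitted for brevity.

```python
def hflip(img):
    """Horizontal flip: reverse each pixel row."""
    return [row[::-1] for row in img]

def adjust_brightness(img, delta):
    """Shift 8-bit pixel intensities by delta, clipped to the 0-255 range."""
    return [[min(255, max(0, p + delta)) for p in row] for row in img]

def augment(img):
    """Yield simple variants of one image: identity, flip, and brightness
    shifts in both directions (a small slice of a full augmentation pipeline)."""
    yield img
    yield hflip(img)
    yield adjust_brightness(img, 30)
    yield adjust_brightness(hflip(img), -30)
```

Each source image thus contributes four training samples, which is the mechanism by which augmentation both enlarges the dataset and simulates acquisition variability.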
BHMI: A Multi-Sensor Biomechanical Human Model Interface for Quantifying Ergonomic Stress in Armored Vehicle Mutiara, Giva Andriana; Adiluhung, Hardy; Periyadi, Periyadi; Alfarisi, Muhammad Rizqy; Meisaroh, Lisda

DOI: 10.35882/jeeemi.v7i3.877

Abstract

Ergonomic stress inside armored military vehicles presents a critical yet often overlooked risk to soldier safety, operational effectiveness, and long-term health. Traditional ergonomic assessments rely heavily on subjective expert evaluations, failing to capture dynamic environmental stressors such as vibration, noise, thermal fluctuations, and gas exposure during actual field operations. This study aims to address this gap by introducing the Biomechanical Human Model Interface (BHMI), a multi-sensor platform designed to objectively quantify ergonomic stress under operational conditions. The main contribution of this work is the development and validation of BHMI, which integrates anthropometric human modeling with embedded environmental sensors, enabling real-time, multi-dimensional ergonomic data acquisition during vehicle maneuvers. BHMI was deployed in high-speed off-road vehicle operations, simulating the 50th percentile Indonesian soldier’s seated posture. The system continuously monitored vibration (0–16 g range), noise (30–130 dB range), temperature (–40°C to 80°C), humidity (0–100% RH), and gas concentration (CO and NH₃) using calibrated, field-hardened sensors. Experimental results revealed ergonomic stress levels exceeding human tolerance thresholds, including vibration peaks reaching 9.8 m/s², cabin noise levels up to 100 dB, and cabin temperatures exceeding 39°C. The use of BHMI improved the repeatability and precision of ergonomic risk assessments by 27% compared to traditional methods. Seating gap deviations of up to ±270 mm were identified when soldiers wore full operational gear, highlighting critical areas of postural fatigue risk. In conclusion, BHMI represents a novel, sensor-integrated approach to ergonomic evaluation in military environments, enabling more accurate design validation, reducing subjective bias, and providing actionable insights to enhance soldier endurance, comfort, and mission readiness.
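The exposure figures above (vibration peaks, cabin noise, temperature) suggest a simple illustration of how multi-channel sensor streams can be reduced to stress flags. The thresholds and names below are assumptions chosen for illustration, not values or code from the BHMI system.

```python
import math

# Illustrative tolerance limits; these specific numbers are assumptions,
# not thresholds taken from the BHMI paper.
VIBRATION_RMS_LIMIT = 5.0   # m/s^2
NOISE_LIMIT_DB = 85.0       # dB

def rms(samples):
    """Root-mean-square of a sequence of acceleration samples (m/s^2)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def flag_exposure(accel_samples, noise_db):
    """Return which ergonomic-stress channels exceed their limits."""
    return {
        "vibration": rms(accel_samples) > VIBRATION_RMS_LIMIT,
        "noise": noise_db > NOISE_LIMIT_DB,
    }
```

A real system would add the thermal, humidity, and gas channels the abstract lists, plus time-weighted exposure rather than instantaneous checks.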
Dual Attention and Channel Atrous Spatial Pyramid Pooling Half-UNet for Polyp Segmentation Sarira, Beatrix Datu; Prasetyo, Heri

DOI: 10.35882/jeeemi.v7i3.893

Abstract

Colorectal cancer (CRC) is a leading cause of cancer-related deaths, with two million cases detected in 2020 and causing one million deaths annually. Approximately 95% of CRC cases originate from colorectal adenomatous polyps. Early detection through accurate polyp segmentation is crucial for preventing and treating CRC effectively. While colonoscopy screening remains the primary detection method, its limitations have prompted the development of Computer-Aided Diagnostic (CAD) systems enhanced by deep learning models. This study proposes a novel neural network architecture called Dual Attention and Channel Atrous Spatial Pyramid Pooling Half-UNet (DACHalf-UNet) for medical polyp image segmentation that balances optimal performance with computational efficiency. The proposed model builds upon the U-Net framework by integrating Double Squeeze-and-Excitation (DSE) blocks in the encoder after the Ghost Module, Channel Atrous Spatial Pyramid Pooling (CASPP) in the bottleneck and decoder, and Attention Gate (AG) mechanisms within the architecture. DACHalf-UNet was trained and evaluated on the CVC-ClinicDB and Kvasir-SEG datasets for 70 epochs. Evaluations demonstrated superior performance with F1-Score and IoU values of 94.23% and 89.28% on CVC-ClinicDB, and 88.40% and 81.47% on Kvasir-SEG, respectively. Comparative analysis showed that DACHalf-UNet outperforms existing architectures including U-Net, U-Net++, ResU-Net, AGU-Net, CSAP-UNet, PRCNet, UNeXt, and UNeSt. Notably, the model achieves this performance with only 0.56 million trainable parameters and 30.29 GFLOPs, significantly reducing computational complexity compared to previous methods. These results demonstrate that DACHalf-UNet effectively addresses the need for accurate and efficient polyp segmentation, potentially enhancing CAD systems and contributing to improved CRC detection and treatment outcomes.
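The F1-Score (Dice) and IoU figures reported for DACHalf-UNet are standard overlap metrics between a predicted and a ground-truth binary mask. A minimal reference computation (not the authors' evaluation code) on flattened 0/1 masks:

```python
def dice_iou(pred, target):
    """Dice coefficient (equivalent to F1-Score on pixels) and IoU for
    binary masks given as flat 0/1 lists of equal length."""
    inter = sum(p * t for p, t in zip(pred, target))
    p_sum, t_sum = sum(pred), sum(target)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou
```

Note the fixed relationship IoU = Dice / (2 - Dice), which is why papers reporting both always show IoU below Dice, as here (89.28% vs 94.23%).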
Performance Comparison of Extreme Learning Machine (ELM) and Hierarchical Extreme Learning Machine (H-ELM) Methods for Heart Failure Classification on Clinical Health Datasets Ichwan Dwi Nugraha; Triando Hamonangan Saragih; Irwan Budiman; Dwi Kartini; Fatma Indriani; Caesarendra, Wahyu

DOI: 10.35882/jeeemi.v7i3.904

Abstract

Heart failure is one of the leading causes of death worldwide and requires accurate and timely diagnosis to improve patient outcomes. However, early detection remains a significant challenge due to the complexity of clinical data, high dimensionality of features, and variability in patient conditions. Traditional clinical methods often fall short in identifying subtle patterns that indicate early stages of heart failure, motivating the need for intelligent computational techniques to support diagnostic decisions. This study aims to enhance predictive modeling for heart failure classification by comparing two supervised machine learning approaches: Extreme Learning Machine (ELM) and Hierarchical Extreme Learning Machine (HELM). The main contribution of this research is the empirical evaluation of HELM's performance improvements over conventional ELM using 10-fold cross-validation on a publicly available clinical dataset. Unlike traditional neural networks, ELM offers fast training by randomly assigning weights and analytically computing output connections, while HELM extends this with a multi-layer structure that allows for more complex feature representation and improved generalization. Both models were assessed based on classification accuracy and Area Under the Curve (AUC), two critical metrics in medical classification tasks. The ELM model achieved an accuracy of 73.95% ± 8.07 and an AUC of 0.7614 ± 0.093, whereas the HELM model obtained a comparable accuracy of 73.55% ± 7.85 but with a higher AUC of 0.7776 ± 0.085. In several validation folds, HELM outperformed ELM, notably reaching 90% accuracy and 0.9250 AUC in specific cases. In conclusion, HELM demonstrates improved robustness and discriminatory capability in identifying heart failure cases. These findings suggest that HELM is a promising candidate for implementation in clinical decision support systems. 
Future research may incorporate feature selection, hyperparameter optimization, and evaluation across multi-center datasets to improve generalizability and real-world applicability.
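ELM's "fast training by randomly assigning weights and analytically computing output connections" fits in a few lines: the hidden weights are never trained, and only the output weights are solved via a pseudo-inverse. This is a generic single-hidden-layer sketch assuming NumPy, not the authors' implementation, and it omits HELM's stacked multi-layer structure.

```python
import numpy as np

def train_elm(X, y, n_hidden=20, seed=0):
    """Minimal ELM: fixed random input weights W, b; only the output
    weights beta are learned, analytically, via the pseudo-inverse."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)          # hidden-layer activations
    beta = np.linalg.pinv(H) @ y    # least-squares output weights
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Because the only "training" is one matrix pseudo-inverse, fitting is orders of magnitude faster than backpropagation, which is the trade-off the abstract's ELM-vs-HELM comparison explores.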
AMIN-CNN: Enhancing Brain Tumor Segmentation through Modality-Aware Normalization and Deep Learning Depuru, Sivakumar; Kumar, M. Sunil

DOI: 10.35882/jeeemi.v7i3.934

Abstract

Accurate segmentation is essential for reliable brain tumor detection, early diagnosis, and treatment, which helps increase patient survival rates. However, the inherent variability in tumor shape, size, and intensity across different MRI modalities makes automated segmentation a challenging task. Traditional deep learning approaches, such as U-Net and its variants, provide robust results but often struggle with modality-specific inconsistencies and generalization across diverse datasets. This research presents AMIN-CNN, an adaptive multimodal invariant normalization scheme incorporated into a novel 3D convolutional neural network to improve brain tumor segmentation across various MRI modalities. Through adaptive normalization, AMIN-CNN handles modality-specific differences more effectively than a basic CNN or U-Net, leading to improved integration of multimodal MRI input data. The model maintains strong learning performance with minimal overfitting beyond epoch 50, which regularization techniques can further reduce. AMIN-CNN achieves the best Dice scores (about 0.92 WT, 0.87 ET, and 0.89 TC), a Precision of 0.3, and an accuracy of 93.2%, while decreasing false positives. AMIN-CNN's lower sensitivity reflects that it identifies smaller but more accurately delineated tumor regions, making it more precise. Compared with traditional methods, AMIN-CNN demonstrates competitive or better segmentation results while maintaining computational efficiency. The model also shows strong boundary agreement, with a Hausdorff Distance of 20, compared to 100 for the other models. According to these results, AMIN-CNN is the most effective and clinically reliable method among the compared architectures, mainly due to its high precision and accurate tumor measurement.
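The core idea of modality-aware normalization is to standardize each MRI modality independently so that T1, T2, FLAIR, etc. reach the network on comparable intensity scales. The sketch below is a simplified stand-in (plain per-modality z-scoring over nonzero voxels), not the paper's adaptive scheme, and the function name is an assumption.

```python
import math

def normalize_modality(volume):
    """Z-score one MRI modality using only its nonzero (brain) voxels,
    leaving background zeros untouched. `volume` is a flat list of
    intensities; a simplified stand-in for adaptive normalization."""
    voxels = [v for v in volume if v != 0]
    mean = sum(voxels) / len(voxels)
    var = sum((v - mean) ** 2 for v in voxels) / len(voxels)
    std = math.sqrt(var) or 1.0  # guard against constant-intensity volumes
    return [(v - mean) / std if v != 0 else 0.0 for v in volume]
```

Applying this per modality before stacking the channels removes the scanner- and sequence-dependent intensity offsets that otherwise dominate what the first convolutional layers learn.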
Advanced Deep Learning for Stroke Classification Using Multi-Slice CT Image Analysis Lezzar, Fouzi; Mili, Seif Eddine

DOI: 10.35882/jeeemi.v7i3.947

Abstract

Brain stroke is a leading cause of mortality and disability globally, necessitating rapid and accurate diagnosis for timely intervention. While Computed Tomography (CT) imaging is the gold standard for stroke detection, manual interpretation is time-consuming, prone to error, and subject to inter-observer variability. Although deep learning models have shown promise in automating stroke detection, many rely on 2D analysis, ignore 3D spatial relationships, or require labour-intensive slice-level annotations, which limits their scalability and clinical applicability. To address these challenges, we propose MedHybridNet, a novel hybrid deep learning architecture that integrates convolutional neural networks (CNNs) for local feature extraction with Transformer-based modules to model global contextual dependencies across volumetric brain scans. Our main contribution is twofold: (1) the SliceAttention mechanism, which dynamically identifies diagnostically relevant slices using only patient-level labels, eliminating the need for costly slice-level annotations while enhancing interpretability through attention maps and Grad-CAM visualizations; and (2) a cGAN-based augmentation strategy that generates high-quality, pathology-informed synthetic CT slices to overcome data scarcity and class imbalance. The framework processes complete 3D brain volumes, leveraging both CNNs and Transformers in a dual-path design, and incorporates hierarchical attention for refined feature selection and classification. Evaluated via patient-wise 5-fold cross-validation on a real-world dataset of 2501 CT scans from 82 patients, MedHybridNet achieves an accuracy of 98.31%, outperforming existing methods under weak supervision. These results demonstrate its robustness, generalization capability, and superior interpretability. 
By combining architectural innovation with clinically relevant design choices, MedHybridNet advances the integration of Artificial Intelligence (AI) into real-world stroke care, offering a scalable, accurate, and explainable solution that can significantly improve diagnostic efficiency and patient outcomes in routine clinical practice.
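The SliceAttention idea — score every slice, softmax the scores into weights, and pool slice features into one patient-level vector — can be illustrated generically. The names and shapes below are assumptions for illustration, not the MedHybridNet code; note how the weights double as the per-slice relevance map that makes this kind of attention interpretable under patient-level labels.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(slice_features, slice_scores):
    """Collapse per-slice feature vectors into one patient-level vector,
    weighted by softmax attention. Returns (pooled_vector, weights);
    the weights indicate which slices drove the prediction."""
    weights = softmax(slice_scores)
    dim = len(slice_features[0])
    pooled = [sum(w * f[d] for w, f in zip(weights, slice_features))
              for d in range(dim)]
    return pooled, weights
```

Because the loss only ever sees the pooled vector, training needs nothing finer than a patient-level label, yet the learned weights still localize the diagnostically relevant slices.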
Advanced Traffic Flow Optimization Using Hybrid Machine Learning and Deep Learning Techniques El Kaim Billah, Mohammed; Mabrouk, Abdelfettah

DOI: 10.35882/jeeemi.v7i3.948

Abstract

Road traffic congestion remains a persistent and critical challenge in modern urban environments, adversely affecting travel times, fuel consumption, air quality, and overall urban livability. To address this issue, this study proposes a hybrid ensemble learning framework for accurate short-term traffic flow prediction across signalized urban intersections. The model integrates Random Forest, Gradient Boosting, and Multi-Layer Perceptron within a weighted voting ensemble mechanism, wherein model contributions are dynamically scaled based on individual validation performance. Benchmarking is performed against traditional and advanced baselines, including Linear Regression, Support Vector Regression, and Long Short-Term Memory (LSTM) networks. A real-world traffic dataset, comprising 56 consecutive days of readings from six intersections, is utilized to validate the approach. A robust preprocessing pipeline is implemented, encompassing anomaly detection, temporal feature engineering (notably time-of-day and day-of-week normalization), and sliding-window encoding to preserve temporal dependencies. Experimental evaluations on 4-intersection and 6-intersection scenarios reveal that the ensemble consistently outperforms all baselines, achieving a peak R² of 0.954 and an RMSE of 0.045. Statistical significance testing using Welch’s t-test confirms the reliability of these improvements. Furthermore, SHAP-based interpretability analysis reveals the dominant influence of temporal features during high-variance periods. While computational overhead and data sparsity during rare events remain limitations, the framework demonstrates strong applicability for deployment in smart traffic systems. Its predictive accuracy and model transparency make it a viable candidate for adaptive signal control, congestion mitigation, and urban mobility planning.
Future work may explore real-time streaming adaptation, external event integration, and generalization across heterogeneous urban networks.
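The "weighted voting ensemble mechanism, wherein model contributions are dynamically scaled based on individual validation performance" can be sketched directly: normalize each model's validation score into a weight and take the weighted average of predictions. A minimal sketch, assuming nonnegative validation scores and pre-computed per-model predictions (the names are illustrative, not the paper's API):

```python
def ensemble_predict(predictions, val_scores):
    """Weighted-average ensemble.

    predictions : list of per-model prediction lists, all the same length
    val_scores  : one nonnegative validation score per model (e.g. R^2);
                  scores are normalized so the weights sum to 1.
    """
    total = sum(val_scores)
    weights = [s / total for s in val_scores]
    n = len(predictions[0])
    return [sum(w * preds[i] for w, preds in zip(weights, predictions))
            for i in range(n)]
```

With three base models (Random Forest, Gradient Boosting, MLP) this reduces to a convex combination of their forecasts, so the ensemble can never be worse than the weighted mean of its members on the training criterion, and a strong validation performer dominates automatically.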
Improving Accuracy and Efficiency of Medical Image Segmentation Using One-Point-Five U-Net Architecture with Integrated Attention and Multi-Scale Mechanisms Fathur Rohman, Muhammad Anang; Prasetyo, Heri; Yudha, Ery Permana; Hsia, Chih-Hsien

DOI: 10.35882/jeeemi.v7i3.949

Abstract

Medical image segmentation is essential for supporting computer-aided diagnosis (CAD) systems by enabling accurate identification of anatomical and pathological structures across various imaging modalities. However, automated medical image segmentation remains challenging due to low image contrast, significant anatomical variability, and the need for computational efficiency in clinical applications. Furthermore, the scarcity of annotated medical images, due to high labelling costs and the requirement of expert knowledge, further complicates the development of robust segmentation models. This study aims to address these challenges by proposing One-Point-Five U-Net, a novel deep learning architecture designed to improve segmentation accuracy while maintaining computational efficiency. The main contribution of this work lies in the integration of multiple advanced mechanisms into a compact architecture: ghost modules, Multi-scale Residual Attention (MRA), Enhanced Parallel Attention (EPA) in skip connections, the Convolutional Block Attention Module (CBAM), and Multi-scale Depthwise Convolution (MSDC) in the decoder. The proposed method was trained and evaluated on four public datasets: CVC-ClinicDB, Kvasir-SEG, BUSI, and ISIC2018. One-Point-Five U-Net achieved sensitivity, specificity, accuracy, DSC, and IoU of 94.89%, 99.63%, 99.23%, 95.41%, and 91.27% on CVC-ClinicDB; 91.11%, 98.60%, 97.33%, 90.93%, and 83.84% on Kvasir-SEG; 85.35%, 98.65%, 96.81%, 87.02%, and 78.18% on BUSI; and 87.67%, 98.11%, 93.68%, 89.27%, and 83.06% on ISIC2018. These results outperform several state-of-the-art segmentation models. In conclusion, One-Point-Five U-Net demonstrates superior segmentation accuracy with only 626,755 parameters and 28.23 GFLOPs, making it a highly efficient and effective model for clinical implementation in medical image analysis.
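The first three metrics this abstract reports (sensitivity, specificity, accuracy) come straight from the pixel-wise confusion counts of a binary mask. A minimal reference computation, not the authors' evaluation code, on flattened 0/1 masks:

```python
def segmentation_metrics(pred, target):
    """Pixel-wise sensitivity, specificity, and accuracy for binary
    segmentation masks given as flat 0/1 lists of equal length."""
    tp = sum(1 for p, t in zip(pred, target) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(pred, target) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(pred, target) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, target) if p == 0 and t == 1)
    sens = tp / (tp + fn) if tp + fn else 1.0   # recall on lesion pixels
    spec = tn / (tn + fp) if tn + fp else 1.0   # recall on background
    acc = (tp + tn) / len(pred)
    return sens, spec, acc
```

Because background pixels vastly outnumber lesion pixels in most medical images, specificity and accuracy run high almost automatically (note the 98-99% values above), which is why DSC and IoU are reported alongside them.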
Combination Of Gamma Correction and Vision Transformer In Lung Infection Classification On CT-Scan Images Kesuma, Lucky Indra; Octavia, Pipin; Sari, Purwita; Batubara, Gracia Mianda Caroline; Karina, Karina

DOI: 10.35882/jeeemi.v7i3.588

Abstract

Lung infection is an inflammatory condition of the lungs with a high mortality rate. Lung infections can be identified using CT-Scan images, where the affected areas are analyzed to determine the infection type. However, manual interpretation of CT-Scan results by medical specialists is often time-consuming, subjective, and requires a high level of accuracy. To address these challenges, this study proposes an automated classification method for lung infections using deep learning techniques. Convolutional Neural Networks (CNNs) are widely used for image classification tasks. However, CNNs operate locally with limited receptive fields, making it challenging to capture global patterns in complex lung CT images. CNNs also struggle to model long-range pixel dependencies, which are crucial for analyzing visually similar regions in lung CT-Scans. This study uses a Vision Transformer (ViT) to overcome these limitations. ViT employs self-attention mechanisms to capture global dependencies across the entire image. The main contribution of this study is the implementation of ViT to enhance classification performance on lung CT-Scan images by capturing complex, global image patterns that CNNs fail to model. However, ViT requires a large dataset to perform optimally. To overcome this, augmentation techniques such as flipping, rotation, and gamma correction are applied to increase the amount of data without altering the important features. The dataset comprises lung CT-Scan images sourced from Kaggle and is divided into Covid and Non-Covid classes. The proposed method demonstrated excellent classification performance, achieving accuracy, sensitivity, specificity, precision, and F1-Score above 90%. Additionally, the Cohen’s kappa coefficient reached 89%.
These results show that the proposed method effectively classifies lung infections using CT-Scan images and has strong potential as a clinical decision-support tool, particularly in reducing diagnostic time and improving consistency in medical evaluations.
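Gamma correction, the augmentation this paper pairs with ViT, remaps 8-bit intensities via out = 255 · (in/255)^γ: γ < 1 brightens, γ > 1 darkens, and pixel ordering is preserved, which is why it can vary appearance "without altering the important features." A minimal sketch (the function name is an assumption, not the paper's code):

```python
def gamma_correct(img, gamma):
    """Gamma correction for 8-bit grayscale stored as a list of rows:
    out = 255 * (in / 255) ** gamma, rounded back to integer levels.
    The mapping is monotonic, so relative intensity order is preserved."""
    return [[round(255 * (p / 255) ** gamma) for p in row] for row in img]
```

Applying several gamma values to each training image yields lighting-varied copies of the same anatomy, exactly the kind of data inflation a data-hungry ViT benefits from.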

Page 2 of 3 | Total Records: 25