Contact Name
Triwiyanto
Contact Email
triwiyanto123@gmail.com
Phone
+628155126883
Journal Mail Official
editorial.jeeemi@gmail.com
Editorial Address
Department of Electromedical Engineering, Poltekkes Kemenkes Surabaya Jl. Pucang Jajar Timur No. 10, Surabaya, Indonesia
Location
Kota Surabaya,
Jawa Timur
INDONESIA
Journal of Electronics, Electromedical Engineering, and Medical Informatics
ISSN: -     EISSN: 2656-8632     DOI: https://doi.org/10.35882/jeeemi
The Journal of Electronics, Electromedical Engineering, and Medical Informatics (JEEEMI) is a peer-reviewed open-access journal. The journal invites scientists and engineers throughout the world to exchange and disseminate theoretical and practice-oriented topics covering three (3) major areas of research: 1) Electronics, 2) Biomedical Engineering, and 3) Medical Informatics (with an emphasis on hardware and software design). Submitted papers must be written in English for an initial review stage by editors and a further review process by a minimum of two reviewers.
Articles: 295 Documents
Medical Image Segmentation Using a Global Context-Aware and Progressive Channel-Split Fusion U-Net with Integrated Attention Mechanisms
Widhayaka, Alfath Roziq; Prasetyo, Heri
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 1 (2026): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v8i1.1371

Abstract

Medical image segmentation serves as a key component in Computer-Aided Diagnosis (CAD) systems across various imaging modalities. However, the task remains challenging because many images have low contrast and high lesion variability, and many clinical environments require efficient models. This study proposes CFCSE-Net, a U-Net-based model that builds upon X-UNet as a baseline for the CFGC and CSPF modules. This model incorporates a modified CFGC module with added Ghost Modules in the encoder, a CSPF module in the decoder, and Enhanced Parallel Attention (EPA) in the skip connections. The main contribution of this paper is the design of a lightweight architecture that combines multi-scale feature extraction with an attention mechanism to maintain low model complexity and increase segmentation accuracy. We train and evaluate CFCSE-Net on four public datasets: Kvasir-SEG, CVC-ClinicDB, BUSI (resized to 256 × 256 pixels), and PH2 (resized to 320 × 320 pixels), with data augmentation applied. We report segmentation performance as the mean ± standard deviation of IoU, DSC, and accuracy across three random seeds. CFCSE-Net achieves 79.78% ± 1.99 IoU, 87.21% ± 1.72 DSC, and 96.70% ± 0.59 accuracy on Kvasir-SEG, 88.11% ± 0.86 IoU, 93.42% ± 0.55 DSC, and 99.04% ± 0.09 accuracy on CVC-ClinicDB, 69.33% ± 2.66 IoU, 78.80% ± 2.65 DSC, and 96.30% ± 0.51 accuracy on BUSI, and 92.27% ± 0.52 IoU, 95.92% ± 0.30 DSC, and 98.06% ± 0.16 accuracy on PH2. Despite its strong performance, the model remains compact with 909,901 parameters and low computational cost, requiring 3.24 GFLOPs for 256 × 256 inputs and 5.07 GFLOPs for 320 × 320 inputs. These results show that CFCSE-Net maintains stable performance on polyp, breast ultrasound, and skin lesion segmentation while it stays compact enough for CAD systems on hardware with low computational resources.
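The IoU and DSC figures quoted above are standard overlap metrics for segmentation masks. As a quick reference, a minimal sketch for flat binary masks (the function name and example masks are illustrative, not taken from the paper):

```python
def iou_dice(pred, target):
    """Compute IoU and Dice (DSC) for two binary masks given as flat 0/1 lists."""
    inter = sum(p * t for p, t in zip(pred, target))
    p_sum, t_sum = sum(pred), sum(target)
    union = p_sum + t_sum - inter
    iou = inter / union if union else 1.0            # two empty masks agree perfectly
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    return iou, dice

# Example: 4-pixel masks that overlap in 2 pixels
iou, dice = iou_dice([1, 1, 1, 0], [0, 1, 1, 1])     # iou = 0.5, dice = 2/3
```

Note that DSC weights the overlap more generously than IoU (DSC = 2·IoU / (1 + IoU)), which is why the reported DSC values sit above the corresponding IoU values.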
Hybrid Swarm-Driven Vision Transformer (HSViT) for Lung Cancer Segmentation and Classification from CT Scans
V, Kavithamani; Kavya, V.; Suganthi, R.; S., Yuvaraj; Monisha, P.; Arun Patrick
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 1 (2026): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v8i2.1384

Abstract

Lung cancer segmentation and classification from computed tomography (CT) images play a vital role in early diagnosis, prognosis assessment, and effective treatment planning. Despite significant progress in medical image analysis, accurate lung lesion analysis remains highly challenging due to overlapping anatomical structures, heterogeneous tissue intensity distributions, irregular and complex tumor shapes, and poorly defined lesion boundaries. These factors often limit the reliability and generalization capability of conventional deep learning models when applied to real-world clinical data. To address these challenges, this paper proposes a Hybrid Swarm-Driven Vision Transformer (HSViT) framework that synergistically combines swarm intelligence with transformer-based deep learning. The processing pipeline begins with Contrast Limited Adaptive Histogram Equalization (CLAHE), which enhances local contrast while suppressing noise amplification, thereby improving the visibility of subtle pulmonary nodules and lesion regions. Subsequently, a U-Net segmentation model optimized using the Coyote Optimization Algorithm (COA) is employed to accurately delineate lung lesions. COA, a swarm-based metaheuristic, adaptively fine-tunes U-Net parameters, enabling improved convergence and more precise boundary detection compared to gradient-based optimization alone. Following segmentation, discriminative lesion features are extracted and passed to the HSViT classifier. The proposed classifier integrates a Dual-Stage Attention Fusion (DSAF) mechanism, which effectively captures both fine-grained local spatial features and long-range global contextual dependencies. The framework achieves a Dice Coefficient of 0.95, an overall classification accuracy of 98.7%, and a minimized training loss of 0.04. 
These results highlight the strong potential of HSViT for reliable automated lung cancer diagnosis and for supporting clinical decision-making systems in real-world healthcare environments.
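The CLAHE step above builds on ordinary histogram equalization, adding tiling into local regions and a clip limit on the histogram before the remap. A sketch of the plain global version only, to show the core CDF remap that CLAHE refines (the pixel values are made up for illustration):

```python
def equalize(pixels, levels=256):
    """Global histogram equalization on a flat list of integer gray levels.
    CLAHE (as used in the paper) additionally tiles the image and clips the
    histogram before equalizing; this shows only the basic CDF remap."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, run = [], 0
    for h in hist:                      # cumulative distribution of gray levels
        run += h
        cdf.append(run)
    cdf_min = next(c for c in cdf if c > 0)
    # stretch the occupied part of the CDF over the full gray range
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1)) for p in pixels]

# A low-contrast strip of pixels (100..103) gets spread across 0..255
out = equalize([100, 100, 101, 101, 102, 102, 103, 103])
```

In practice `cv2.createCLAHE(clipLimit=..., tileGridSize=...)` from OpenCV is the usual implementation; the clip limit is what suppresses the noise amplification mentioned in the abstract.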
Optimizing Input Window Length and Feature Requirements for Machine Learning-Based Postprandial Hyperglycemia Prediction
Maulana, Muhammad Rafly Alfarizqy; Indriani, Fatma; Abadi, Friska; Kartini, Dwi; Mazdadi, Muhammad Itqan
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 1 (2026): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v8i1.1401

Abstract

Continuous glucose monitoring systems currently generate alerts only after blood glucose thresholds are breached, limiting their utility for proactive diabetes management. Predicting postprandial glucose excursions before they occur requires determining the optimal amount of historical data and identifying which features contribute most to prediction accuracy. This study systematically evaluates how the length of the pre-meal observation window and feature composition affect machine-learning predictions of hyperglycemia events 60 minutes after eating. We analyzed 1,642 meal events from 45 adults wearing continuous glucose sensors, constructing features from pre-meal glucose trajectories, meal macronutrients, time of day, and health status. Four observation windows (15, 30, 45, 60 minutes) and three feature sets (all features, glucose-only, meal-only) were evaluated using Random Forest, XGBoost, and CatBoost with 5-fold group cross-validation. CatBoost with a 30-minute window achieved the best performance: 72.6% F1-macro, 79.6% accuracy, and 64.0% recall for hyperglycemia detection. Extending windows beyond 30 minutes did not yield consistent benefits, whereas 15-minute windows yielded comparable results. Glucose trajectory features alone retained 94% of full model performance (68.5% F1-macro), whereas meal composition alone proved insufficient (59.4% F1-macro). These findings demonstrate that recent glucose history dominates short-term prediction, enabling practical real-time systems with minimal data requirements. A 30-minute observation window with glucose and meal features offers an effective balance between prediction accuracy and system responsiveness.
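The "glucose trajectory features" from a pre-meal observation window can be illustrated with a minimal sketch; the feature names and the 5-minute sampling interval are assumptions for illustration, and the paper's actual feature set is richer:

```python
def window_features(glucose, minutes_per_sample=5):
    """Summarize a pre-meal glucose window (oldest reading first) into simple
    trajectory features: last value, mean, and least-squares slope (mg/dL/min)."""
    n = len(glucose)
    mean = sum(glucose) / n
    t = [i * minutes_per_sample for i in range(n)]   # sample times in minutes
    t_mean = sum(t) / n
    num = sum((ti - t_mean) * (g - mean) for ti, g in zip(t, glucose))
    den = sum((ti - t_mean) ** 2 for ti in t)
    return {"last": glucose[-1], "mean": mean, "slope": num / den}

# A 30-minute window sampled every 5 minutes, rising 2 mg/dL per minute
feats = window_features([100, 110, 120, 130, 140, 150, 160])
```

A steep positive slope just before a meal is exactly the kind of recent-history signal the study finds dominates short-term hyperglycemia prediction.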
Classification of Ultrasound Images Using ResNet-50 with a Convolutional Block Attention Module (CBAM)
Afif, Bagus Tegar Zahir; Wiharto, Wiharto; Salamah, Umi
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 1 (2026): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v8i1.1406

Abstract

Liver fibrosis staging is a crucial component in the clinical management of chronic liver disease because it directly affects prognosis, therapeutic decision-making, and long-term patient monitoring. Ultrasound imaging is widely used as a noninvasive diagnostic modality due to its safety, low cost, and broad accessibility. Nevertheless, ultrasound-based fibrosis assessment remains challenging because liver parenchymal echotexture often exhibits low contrast, speckle noise, and subtle inter-stage variations, particularly among adjacent METAVIR stages. These characteristics frequently limit the effectiveness of conventional convolutional neural networks, which tend to emphasize dominant global patterns while suppressing weak but clinically meaningful texture cues. This study presents a task-oriented integration of a Convolutional Block Attention Module into a ResNet-50 backbone to enhance feature discrimination for five-stage liver fibrosis classification using heterogeneous B-mode ultrasound images. Rather than introducing a new attention mechanism, the contribution lies in the systematic insertion of CBAM after residual outputs across multiple network stages, enabling repeated channel and spatial recalibration from low-level texture descriptors to higher-level semantic representations. To further improve robustness and reduce prediction variance, a stratified 5-fold training strategy is combined with logit-level ensemble inference, where logits from independently trained fold models are averaged prior to Softmax normalization. Experiments were conducted on a publicly available dataset comprising 6,323 ultrasound images acquired from two tertiary hospitals using multiple ultrasound systems, with fibrosis stages labeled from F0 to F4 according to histopathology-based METAVIR scoring. 
The proposed framework achieves a test accuracy of 98.34% and consistently high precision, recall, and F1 scores across all fibrosis stages, with the most pronounced improvement observed for intermediate stages. Statistical analysis based on paired fold-wise comparisons confirms that the performance gain over the baseline ResNet-50 model is statistically significant. These results demonstrate that combining lightweight attention-based feature refinement with logit ensemble inference effectively addresses the inherent challenges of ultrasound-based liver fibrosis staging and provides a reliable noninvasive decision support framework with strong potential for clinical application and future multicenter validation.
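The logit-level ensemble described above (average the fold models' logits, then apply Softmax once) can be sketched as follows; the logit values are illustrative only:

```python
import math

def ensemble_predict(fold_logits):
    """Average per-fold logit vectors, then apply Softmax once
    (logit-level ensembling rather than averaging per-fold probabilities)."""
    k = len(fold_logits)
    avg = [sum(col) / k for col in zip(*fold_logits)]
    m = max(avg)                                   # subtract max for numerical stability
    exps = [math.exp(a - m) for a in avg]
    z = sum(exps)
    return [e / z for e in exps]

# Two fold models voting over three classes (values are made up)
probs = ensemble_predict([[2.0, 0.5, -1.0],
                          [1.6, 0.9, -0.5]])
```

Averaging before the Softmax preserves each model's confidence margins, which is one common motivation for logit-level over probability-level averaging.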
A Comparative Analysis of SMOTE and ADASYN for Cervical Cancer Detection using XGBoost with MICE Imputation
Ramadhan, Mita Azzahra; Saragih, Triando Hamonangan; Kartini, Dwi; Muliadi, Muliadi; Mazdadi, Muhammad Itqan
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 1 (2026): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v8i1.1415

Abstract

Cervical cancer remains a significant global health burden for women, with approximately 660,000 new cases and 350,000 associated deaths recorded worldwide in 2022. Machine learning methods have shown great promise in advancing timely detection and accurate diagnosis. This investigation compares two widely used oversampling strategies, Synthetic Minority Oversampling Technique (SMOTE) and Adaptive Synthetic Sampling (ADASYN), applied to cervical cancer identification via the XGBoost classifier, paired with Multiple Imputation by Chained Equations (MICE) to handle incomplete data. The dataset consists of cervical cancer risk factors with four diagnostic outcomes: Hinselmann, Schiller, Cytology, and Biopsy, which are treated as independent binary classification tasks rather than a single multilabel classification problem. The process begins by preparing a dataset of cervical cancer risk factors through MICE imputation, then applying SMOTE and ADASYN to address class imbalance. The XGBoost model is optimized using Random Search hyperparameter tuning and evaluated across train-test split ratios (50:50, 60:40, 70:30, 80:20, and 90:10) using accuracy, precision (macro, micro, weighted), recall (macro, micro, weighted), F1-score (macro, micro, weighted), and AUC metrics. The results indicated that the XGBoost setup with MICE and SMOTE outperformed the others, achieving 97.1% accuracy, 97.1% micro-precision, 97.1% micro-recall, 97.1% micro-F1, and 97.1% AUC. Meanwhile, the ADASYN-integrated model showed marginally lower results, with 95.4% accuracy, 95.4% micro-precision, 95.4% micro-recall, 95.4% micro-F1, and 55.5% AUC. SMOTE proved more adept at creating evenly distributed synthetic data for the underrepresented group. Overall, this work underscores the value of integrating MICE imputation, SMOTE oversampling, and tuned XGBoost as a reliable approach for cervical cancer detection.
These insights pave the way for automated screening tools that can bolster clinical judgment and improve early diagnosis outcomes.
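The core of SMOTE is linear interpolation between a minority sample and one of its minority-class neighbors; a minimal sketch (neighbor search and class bookkeeping omitted for brevity):

```python
import random

def smote_sample(x, neighbor, rng=random.random):
    """Generate one SMOTE synthetic point on the segment between a minority
    sample and a same-class neighbor: x + lam * (neighbor - x), lam in [0, 1)."""
    lam = rng()
    return [xi + lam * (ni - xi) for xi, ni in zip(x, neighbor)]

# With lam fixed at 0.5 the synthetic point is the segment midpoint
synthetic = smote_sample([0.0, 0.0], [2.0, 4.0], rng=lambda: 0.5)
```

ADASYN uses the same interpolation but adaptively decides *how many* synthetic points each minority sample generates, biasing generation toward samples that are harder to learn, which is consistent with the more uneven synthetic distribution the study observes for ADASYN.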
Dengue Risk Stratification in Semarang City Using a Gaussian Mixture Model Based on Multi-Dimensional Urban Indicators
Izzatil Ismah, Nabila; Fahmi, Amiq
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 1 (2026): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v8i1.1426

Abstract

Dengue fever remains a pressing public health challenge in major Indonesian cities, including Semarang. The complex interplay of heterogeneous demographic structures and built-environment characteristics generates spatially uneven transmission risks, while conventional risk-mapping approaches often fail to capture the probabilistic nature of these risks at fine-scale administrative levels, limiting their utility for targeted interventions. This study aims to develop a robust, replicable framework for dengue risk stratification that more accurately identifies localized high-risk areas and supports evidence-based public health decision-making. The research introduces a probabilistic clustering approach using Gaussian Mixture Models (GMM) to move beyond rigid partitioning methods, while simultaneously integrating multi-year incidence data (2021–2024) with eighteen multidimensional urban indicators across 177 sub-districts (kelurahan). This combined contribution advances methodological rigor by accommodating overlapping data distributions and probabilistic cluster memberships, and provides a nuanced, evidence-driven tool for stratifying dengue risk and guiding hyper-local interventions. Several GMM configurations were evaluated using the Bayesian Information Criterion (BIC) to determine the optimal number of clusters. The BIC value declined markedly when the number of clusters increased from two to three, indicating a substantial improvement in model fit. Further increases yielded only marginal gains, and the lowest BIC was achieved at three clusters, representing the most parsimonious and effective solution. Internal validation confirmed that the cluster structure robustly captured epidemiological variance despite the inherent heterogeneity of urban spatial data. Cluster 2 emerged as a critical high-risk epicenter, geographically limited yet characterized by consistently elevated incidence, pronounced temporal variability, and extreme values. 
The proposed GMM-based framework demonstrates that dengue risk in Semarang is concentrated within localized foci of heightened vulnerability rather than uniformly distributed. Ultimately, the methodology is replicable in other complex tropical urban environments, thereby strengthening both academic rigor and practical public health decision-making.
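The BIC-based model selection described above can be sketched directly from its definition, BIC = k·ln(n) − 2·ln(L̂), where k is the number of free parameters and n the number of data points. The log-likelihoods below are made-up values chosen only to mirror the pattern the study reports (a large drop from two to three clusters, marginal change after):

```python
import math

def bic(log_likelihood, n_params, n_points):
    """Bayesian Information Criterion: k * ln(n) - 2 * ln(L-hat). Lower is better."""
    return n_params * math.log(n_points) - 2.0 * log_likelihood

# Hypothetical fits over the 177 sub-districts; parameter counts grow with clusters
candidates = {2: bic(-520.0, 11, 177),
              3: bic(-470.0, 17, 177),
              4: bic(-468.0, 23, 177)}
best_k = min(candidates, key=candidates.get)   # three clusters wins here
```

In practice scikit-learn's `GaussianMixture` exposes this directly via its `bic()` method, with the parameter count determined by the covariance type.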
Mental Health Detection Expert System Model Based on DASS-42 Using Fuzzy Inference System
Rahmat, Eko Ginanjar Basuki; Wiharto, Wiharto; Salamah, Umi
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 1 (2026): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v8i1.1443

Abstract

Mental health disorders such as depression, anxiety, and stress frequently co-occur and exhibit overlapping symptoms, making accurate diagnosis challenging due to the subjective nature of psychological assessments. Conventional use of the Depression Anxiety Stress Scales (DASS-42) relies on rigid score aggregation, while many machine learning approaches fail to adequately represent uncertainty and expert reasoning. This study aims to develop an expert system for mental health detection by integrating fuzzy logic with expert knowledge derived from the DASS-42 instrument. The main contribution of this research is a hybrid knowledge-based framework that combines decision tree–based rule extraction with psychological expert validation, ensuring both interpretability and clinical relevance. The proposed method employs a Fuzzy Inference System (FIS) using triangular and trapezoidal membership functions to model symptom intensity as linguistic variables, followed by rule generation using the CART decision tree algorithm and expert refinement. System performance is evaluated using Cohen’s Kappa coefficient, including standard error and 95% confidence intervals, to measure inter-rater reliability between the expert system, the DASS instrument, and two human experts. The results indicate that the expert system achieves almost perfect agreement in identifying dominant psychological conditions, with an average Kappa value of 0.918. For severity-level classification, strong agreement is observed for depression (Kappa = 0.842) and stress (Kappa = 0.811), while anxiety severity shows moderate-to-substantial agreement (Kappa = 0.648), reflecting inherent variability in expert interpretation. In conclusion, the proposed FIS-based expert system effectively captures expert diagnostic reasoning and outperforms decision tree–only models, demonstrating strong potential as an interpretable and reliable mental health screening tool.
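The triangular and trapezoidal membership functions mentioned above map a raw score to a degree of membership in a linguistic term. A minimal sketch; the breakpoints and the "moderate depression" band are hypothetical, not the paper's calibrated values:

```python
def tri_mf(x, a, b, c):
    """Triangular membership: 0 at a and c, peak of 1 at b."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def trap_mf(x, a, b, c, d):
    """Trapezoidal membership: ramp up on a..b, flat 1 on b..c, ramp down on c..d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# e.g. a hypothetical "moderate depression" fuzzy set over DASS-42 scores
mu = tri_mf(17, 13, 17, 21)   # score 17 is fully "moderate" under these breakpoints
```

Because adjacent sets overlap, a single score can partially activate two severity terms at once, which is how the FIS represents the diagnostic uncertainty that rigid score cut-offs discard.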
Hybrid Separable Conv-ViT–CheXNet with Explainable Localization for Pneumonia Diagnosis
Khushboo Trivedi; Thacker, Chintan Bhupeshbhai
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 2 (2026): April
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v8i2.1262

Abstract

This research presents a robust, interpretable, and computationally efficient deep learning framework for multiclass pneumonia classification from chest X-ray images, with a strong emphasis on diagnostic accuracy, model transparency, and real-time applicability in clinical settings. We propose SCViT-CheXNet, a novel hybrid architecture that integrates a Separable Convolution Vision Transformer (SCViT) with a simplified CheXNet backbone based on DenseNet121 to achieve efficient spatial feature extraction, hierarchical representation learning, and faster model convergence. The use of separable convolution significantly reduces computational complexity while preserving discriminative feature learning, and the transformer module effectively captures long-range dependencies in radiographic patterns. To address the critical issue of class imbalance inherent in medical imaging datasets, an Auxiliary Classifier Deep Convolutional Generative Adversarial Network (ADCGAN) is employed to generate synthetic samples for underrepresented pneumonia categories, thereby enhancing data diversity and improving model generalization. The proposed framework is extensively evaluated on two benchmark datasets: Dataset-1, consisting of Normal, Viral, Bacterial, and Fungal Pneumonia cases, and Dataset-2, comprising Normal, Viral Pneumonia, COVID-19, and Lung Opacity classes. Model interpretability is ensured through Gradient-weighted Class Activation Mapping (Grad-CAM), which enables visualization of disease-specific regions in chest X-ray images and validates the clinical relevance of the learned representations. Experimental results demonstrate that SCViT-CheXNet consistently outperforms existing convolutional neural network and transformer-based approaches, achieving 99% accuracy, precision, recall, and F1-score across both datasets. 
The synergistic integration of separable convolution, transformer-based feature modeling, and GAN-driven data augmentation results in a lightweight yet highly accurate and interpretable diagnostic system. Overall, the SCViT-CheXNet framework shows strong potential for deployment in automated pneumonia and COVID-19 screening systems, offering reliable support for real-time clinical decision-making and contributing to improved patient outcomes.
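The complexity reduction claimed for separable convolution is easy to quantify: a depthwise-separable layer replaces one k×k×C_in×C_out filter bank with a per-channel k×k depthwise pass plus a 1×1 pointwise projection. A sketch with illustrative channel counts (bias terms omitted):

```python
def conv_params(k, c_in, c_out):
    """Weight count of a standard k x k convolution layer."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Weight count of a depthwise-separable convolution:
    one k x k filter per input channel, then a 1 x 1 pointwise projection."""
    return k * k * c_in + c_in * c_out

standard = conv_params(3, 64, 128)             # 73,728 weights
separable = separable_conv_params(3, 64, 128)  # 8,768 weights, roughly 8x fewer
```

The saving grows with kernel size and output channels, which is why separable convolutions are a common choice for lightweight diagnostic models meant for real-time use.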
Heavy–Light Soft-Vote Fusion of EEG Heatmaps for Autism Spectrum Disorder Detection
Melinda, Melinda; Gazali, Syahrul; Away, Yuwaldi; Rafiki, Aufa; Wong, W.K; Muliyadi, Muliyadi; Rusdiana, Siti
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 1 (2026): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v8i1.1377

Abstract

Autism spectrum disorder is a neurodevelopmental condition that affects social communication and behaviour, and diagnosis still relies on subjective behavioural assessment. Electroencephalography provides a noninvasive view of brain activity but is noisy and often analysed with handcrafted features or evaluation schemes that risk data leakage. This study proposes a deep learning pipeline that combines wavelet denoising, EEG-to-image encoding, and heavy–light decision fusion for autism detection from EEG. Sixteen-channel EEG from children and adolescents with autism and typically developing peers in the KAU dataset is denoised using discrete wavelet transform shrinkage, segmented into fixed 4-second windows, and rendered as pseudo-colour heatmaps. These images are used to fine-tune five ImageNet-pretrained architectures under a unified training protocol with 5-fold cross-validation. Heavy–light fusion combines one heavyweight backbone and one lightweight backbone through weighted soft voting on class posterior probabilities. The strongest single model, ConvNeXt Tiny, attains about 97.25% accuracy and a 97.10% F1 score at the window level. The best heavy–light pair, ConvNeXt plus ShuffleNet, reaches about 99.56% accuracy and a 99.53% F1 score, with sensitivity and specificity in the 99% range. Fusion mainly reduces missed ASD windows without increasing false alarms, indicating complementary error patterns between heavy and light models. These findings show that the proposed denoise–encode–classify pipeline with heavy–light fusion yields more robust autism EEG classification than individual backbones and can support EEG-based decision support in autism screening.
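The weighted soft voting step above fuses the two backbones' class posteriors before taking the argmax. A minimal sketch; the weights and probabilities are illustrative, not the paper's tuned values:

```python
def soft_vote(p_heavy, p_light, w_heavy=0.6, w_light=0.4):
    """Weighted soft vote over class posteriors from a heavy and a light model.
    Returns the fused class index and the fused posterior vector."""
    fused = [w_heavy * ph + w_light * pl for ph, pl in zip(p_heavy, p_light)]
    return fused.index(max(fused)), fused

# Heavy model leans toward class 0 (ASD), light model is unsure
label, fused = soft_vote([0.8, 0.2], [0.45, 0.55])
```

Because the fused score is a convex combination, a confident heavy model can rescue windows the light model would miss, and vice versa, which matches the complementary error patterns the study reports.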
A Multimodal Explainable-AI Approach for Deep-Learning-based Epileptic Seizure Detection
Patil, Ashwini; Patil, Megharani
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 2 (2026): April
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v8i2.1380

Abstract

Epilepsy carries a high risk of sudden death and increased premature mortality, highlighting the importance of automatic seizure detection to support faster diagnosis and treatment. The opacity of existing deep learning models limits their real-world application in diagnosing epileptic seizures, underscoring the need for more transparent and explainable systems. Limited research studies are available on Explainable Artificial Intelligence (XAI)-based epileptic seizure detection, and these studies provide only a visual explanation for the model’s behaviour. Additionally, these studies lack validation of the XAI outputs using quantitative measures. Thus, this research aims to develop an explainable epileptic seizure detection model to address the limitations of existing black-box deep learning approaches. It proposes a novel Hybrid Transformer-DenseNet121-XAI (HTD-MXAI) integrated model for detecting epileptic seizures from EEG data. The proposed model leverages advanced deep learning architectures, namely the Transformer and DenseNet121, for automatic feature extraction, while simultaneously extracting handcrafted features from the time, frequency, and spatial domains. The XAI techniques, such as Attention Weights, Saliency Maps, and SHapley Additive eXplanations (SHAP), are integrated with the proposed model to provide multimodal explainability for the model’s decision-making process. The results demonstrate that the proposed model outperforms state-of-the-art models for seizure detection. It achieves an overall (aggregated across subjects) accuracy of 99.14%, Sensitivity of 98.49%, and Specificity of 99.68% when applied to the CHB-MIT dataset. The Faithfulness score of 40.94% and completeness score of 1.00 indicate that the explanations provided by the XAI method for the model’s prediction are highly reliable. 
In conclusion, the proposed model offers a promising solution to key limitations of existing approaches, including the opacity of black-box models, limited multimodal explainability, and the lack of quantitative validation of XAI techniques in the context of epileptic seizure detection.
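The faithfulness scores cited above quantify whether an explanation's attributions actually track the model's behaviour. A common deletion-style sketch of the idea (the paper's exact metric may differ; the toy model and attributions here are fabricated for illustration):

```python
def faithfulness(model, x, attributions, baseline=0.0):
    """Deletion-based faithfulness check: zero out each feature in decreasing
    order of attributed importance and record how much the output drops.
    Faithful attributions produce larger drops for higher-attributed features."""
    order = sorted(range(len(x)), key=lambda i: -abs(attributions[i]))
    base = model(x)
    drops = []
    for i in order:
        x_del = list(x)
        x_del[i] = baseline          # remove one feature at a time
        drops.append(base - model(x_del))
    return drops

def toy_model(x):
    """A linear model whose true importances match the attributions exactly."""
    return 3 * x[0] + 1 * x[1] + 0.5 * x[2]

drops = faithfulness(toy_model, [1.0, 1.0, 1.0], [3, 1, 0.5])
```

For this perfectly faithful toy case, the drops come out monotonically decreasing; for a real model, the correlation between attribution rank and output drop gives the kind of quantitative XAI validation the abstract calls for.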