Contact Name
Triwiyanto
Contact Email
triwiyanto123@gmail.com
Phone
+628155126883
Journal Mail Official
editorial.jeeemi@gmail.com
Editorial Address
Department of Electromedical Engineering, Poltekkes Kemenkes Surabaya Jl. Pucang Jajar Timur No. 10, Surabaya, Indonesia
Location
Kota Surabaya,
Jawa Timur
INDONESIA
Journal of Electronics, Electromedical Engineering, and Medical Informatics
ISSN : -     EISSN : 2656-8632     DOI : https://doi.org/10.35882/jeeemi
The Journal of Electronics, Electromedical Engineering, and Medical Informatics (JEEEMI) is a peer-reviewed open-access journal. The journal invites scientists and engineers throughout the world to exchange and disseminate theoretical and practice-oriented topics covering three (3) major areas of research: 1) Electronics, 2) Biomedical Engineering, and 3) Medical Informatics (with emphasis on hardware and software design). Submitted papers must be written in English for an initial review stage by editors and a further review process by a minimum of two reviewers.
Articles: 13 Documents
Issue "Vol 8 No 2 (2026): April" : 13 Documents
Hybrid Separable Conv-ViT–CheXNet with Explainable Localization for Pneumonia Diagnosis Khushboo Trivedi; Thacker, Chintan Bhupeshbhai
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 2 (2026): April
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v8i2.1262

Abstract

This research presents a robust, interpretable, and computationally efficient deep learning framework for multiclass pneumonia classification from chest X-ray images, with a strong emphasis on diagnostic accuracy, model transparency, and real-time applicability in clinical settings. We propose SCViT-CheXNet, a novel hybrid architecture that integrates a Separable Convolution Vision Transformer (SCViT) with a simplified CheXNet backbone based on DenseNet121 to achieve efficient spatial feature extraction, hierarchical representation learning, and faster model convergence. The use of separable convolution significantly reduces computational complexity while preserving discriminative feature learning, and the transformer module effectively captures long-range dependencies in radiographic patterns. To address the critical issue of class imbalance inherent in medical imaging datasets, an Auxiliary Classifier Deep Convolutional Generative Adversarial Network (ADCGAN) is employed to generate synthetic samples for underrepresented pneumonia categories, thereby enhancing data diversity and improving model generalization. The proposed framework is extensively evaluated on two benchmark datasets: Dataset-1, consisting of Normal, Viral, Bacterial, and Fungal Pneumonia cases, and Dataset-2, comprising Normal, Viral Pneumonia, COVID-19, and Lung Opacity classes. Model interpretability is ensured through Gradient-weighted Class Activation Mapping (Grad-CAM), which enables visualization of disease-specific regions in chest X-ray images and validates the clinical relevance of the learned representations. Experimental results demonstrate that SCViT-CheXNet consistently outperforms existing convolutional neural network and transformer-based approaches, achieving 99% accuracy, precision, recall, and F1-score across both datasets. 
The synergistic integration of separable convolution, transformer-based feature modeling, and GAN-driven data augmentation results in a lightweight yet highly accurate and interpretable diagnostic system. Overall, the SCViT-CheXNet framework shows strong potential for deployment in automated pneumonia and COVID-19 screening systems, offering reliable support for real-time clinical decision-making and contributing to improved patient outcomes.
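As a rough illustration of why separable convolution cuts computation, the parameter count of a depthwise separable layer can be compared with a standard convolution. The kernel and channel sizes below are arbitrary examples, not the layer configuration of SCViT-CheXNet:

```python
# Parameter-count comparison: standard vs. depthwise separable convolution.
# Sizes are illustrative only, not the layers used in SCViT-CheXNet.
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (bias omitted)."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depthwise k x k conv (one filter per channel) + 1x1 pointwise conv."""
    depthwise = k * k * c_in
    pointwise = c_in * c_out
    return depthwise + pointwise

k, c_in, c_out = 3, 64, 128
standard = conv_params(k, c_in, c_out)             # 73728 weights
separable = separable_conv_params(k, c_in, c_out)  # 576 + 8192 = 8768 weights
reduction = standard / separable                   # roughly 8x fewer weights
```

The same factorization applies to multiply-accumulate counts, which is why separable convolutions preserve discriminative filtering at a fraction of the cost.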
A Multimodal Explainable-AI Approach for Deep-Learning-based Epileptic Seizure Detection Patil, Ashwini; Patil, Megharani

DOI: 10.35882/jeeemi.v8i2.1380

Abstract

Epilepsy carries a high risk of sudden death and increased premature mortality, highlighting the importance of automatic seizure detection to support faster diagnosis and treatment. The opacity of existing deep learning models limits their real-world application in diagnosing epileptic seizures, underscoring the need for more transparent and explainable systems. Limited research studies are available on Explainable Artificial Intelligence (XAI)-based epileptic seizure detection, and these studies provide only a visual explanation for the model’s behaviour. Additionally, these studies lack validation of the XAI outputs using quantitative measures. Thus, this research aims to develop an explainable epileptic seizure detection model to address the limitations of existing black-box deep learning approaches. It proposes a novel Hybrid Transformer-DenseNet121-XAI (HTD-MXAI) integrated model for detecting epileptic seizures from EEG data. The proposed model leverages advanced deep learning architectures, namely the Transformer and DenseNet121, for automatic feature extraction, while simultaneously extracting handcrafted features from the time, frequency, and spatial domains. The XAI techniques, such as Attention Weights, Saliency Maps, and SHapley Additive eXplanations (SHAP), are integrated with the proposed model to provide multimodal explainability for the model’s decision-making process. The results demonstrate that the proposed model outperforms state-of-the-art models for seizure detection. It achieves an overall (aggregated across subjects) accuracy of 99.14%, Sensitivity of 98.49%, and Specificity of 99.68% when applied to the CHB-MIT dataset. The Faithfulness score of 40.94% and completeness score of 1.00 indicate that the explanations provided by the XAI method for the model’s prediction are highly reliable. 
In conclusion, the proposed model offers a promising solution to the constraints, including the interpretability of black box models, limited multimodal explainability, and the validation of XAI techniques in the context of epileptic seizure detection.
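As a minimal illustration of the saliency-map idea behind the explainability component, consider a linear scorer, where the input gradient is simply the weight vector; the weights and feature values below are invented for illustration:

```python
import numpy as np

# Toy gradient-saliency sketch: for a linear scorer f(x) = w . x, the
# gradient with respect to the input is w itself, so saliency is |w|.
# Real saliency maps backpropagate through a deep network; this only
# illustrates the principle on invented values.
rng = np.random.default_rng(0)
w = np.array([0.1, -2.0, 0.5, 0.0])   # hypothetical learned weights
x = rng.normal(size=4)                 # one toy EEG-feature vector

score = float(w @ x)
saliency = np.abs(w)                        # |d score / d x|
most_important = int(np.argmax(saliency))   # feature 1 dominates here
```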
Optimized Recurrent Neural Network Based on Improved Bacterial Colony Optimization for Predicting Osteoporosis Diseases B, Sivasakthi; K, Preetha; D, Selvanayagi

DOI: 10.35882/jeeemi.v8i2.1410

Abstract

Osteoporosis is a silent disease that often remains undetected until significant fragility fractures occur, and despite its high prevalence its screening rate is low. In predictive healthcare analytics, the Elman recurrent neural network (ERNN) has been widely used as a learning technique. Traditional learning algorithms have limitations, such as slow convergence rates and local minima that prevent gradient descent from finding the global minimum of the error function. The main goal is to precisely estimate each individual's risk of developing osteoporosis; these forecasts are essential for prompt diagnosis and treatment, which have a significant influence on patient outcomes. Hence, the present research focuses on a more efficient prediction method based on an optimized Elman recurrent neural network for predicting osteoporosis diseases. The optimized ERNN method, IBCO-ERNN, applies improved bacterial colony optimization (IBCO) to the ERNN weights and biases. The IBCO approach uses an iterative local search (ILS) algorithm to enhance the convergence rate and avoid the local optima problem of conventional BCO. Subsequently, the IBCO is used to optimize the ERNN's weights and biases, thereby improving convergence speed and detection rate. The effectiveness of IBCO-ERNN is evaluated using four different types of osteoporosis datasets: Femoral neck, Lumbar spine, Femoral and Spine, and BMD datasets. The proposed IBCO-ERNN produced higher accuracy at 95.61%, 96.26%, 97.26%, and 97.54% for the Femoral neck, Lumbar spine, Femoral, and Spine datasets, respectively. The experimental findings demonstrated that, compared with other predictors, the proposed IBCO-ERNN achieved respectable accuracy and rapid convergence.
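The Elman recurrence that IBCO tunes can be sketched in a few lines. The layer sizes, weights, and input sequence below are hypothetical and untrained; IBCO would search over these parameters instead of applying gradient descent:

```python
import numpy as np

# Minimal Elman RNN forward pass (hypothetical sizes, untrained weights):
# h_t = tanh(W_x x_t + W_h h_{t-1} + b),  y = sigmoid(w_o . h_T).
# IBCO-ERNN optimizes (W_x, W_h, b, w_o); this sketch only shows the
# recurrence those parameters drive.
rng = np.random.default_rng(1)
n_in, n_hid, T = 3, 5, 4
W_x = rng.normal(scale=0.5, size=(n_hid, n_in))
W_h = rng.normal(scale=0.5, size=(n_hid, n_hid))
b = np.zeros(n_hid)
w_o = rng.normal(scale=0.5, size=n_hid)

x_seq = rng.normal(size=(T, n_in))   # a toy sequence of patient features
h = np.zeros(n_hid)
for x_t in x_seq:
    h = np.tanh(W_x @ x_t + W_h @ h + b)   # context layer feeds back

risk = 1.0 / (1.0 + np.exp(-(w_o @ h)))    # risk score in (0, 1)
```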
Impact of Optimizer Algorithm on NasNetMobile Model for Eight-class Retinal Disease Classification from OCT Images Selvarajan, Madhumithaa; M, Masoodhu Banu N.

DOI: 10.35882/jeeemi.v8i2.1464

Abstract

Artificial intelligence (AI) is an emerging technology that plays a vital role in various fields, including medicine. Ophthalmology was among the earliest fields to adopt AI for diagnosing retinal diseases. Many imaging techniques are available, but Optical Coherence Tomography (OCT) is particularly useful for early-stage diagnosis. OCT is a non-invasive imaging method that offers high-resolution visualization of the retinal structure, aiding the ophthalmologist in differentiating between normal and abnormal retina. Automated OCT-based retinal disease classification using deep learning (DL) is important for early disease detection. Most DL models achieve high performance, but the influence of the optimizer on model behaviour, convergence, and explainability remains a challenge. To bridge this gap, this study evaluates the performance and convergence of five optimizers (RMSprop, AdamW, Adam, Nadam, and SGD) on the NasNetMobile model. The model was trained on the OCT-8 dataset, which comprises seven diseased retinal classes and one normal class of OCT images. The seven diseases are Age-related Macular Degeneration (AMD), choroidal neovascularization (CNV), Central Serous Retinopathy (CSR), diabetic macular edema (DME), diabetic retinopathy (DR), DRUSEN, and Macular Hole (MH). The study also analyzes convergence behaviour and explainability through the early stopping regularization technique and GradCAM XAI, respectively. The model achieved accuracies of 71%, 93%, 96%, 97%, and 97% with these optimizers, respectively. Compared with the other optimizers, the SGD optimizer achieved high accuracy within 22 epochs, which indicates better generalization. GradCAM XAI highlights the disease-relevant regions across different retinal diseases. This framework emphasizes the significance of selecting an appropriate optimizer for robust retinal disease classification using a DL model trained on OCT images.
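The update rules being compared can be sketched side by side on a one-dimensional quadratic; the learning rate, step counts, and objective are illustrative, not the paper's training settings:

```python
import numpy as np

# SGD vs. Adam update rules on f(w) = w^2 (gradient 2w).
# Hyperparameters below are illustrative only.
def sgd(w, lr=0.1, steps=50):
    for _ in range(steps):
        w -= lr * 2 * w               # plain gradient step
    return w

def adam(w, lr=0.1, steps=100, b1=0.9, b2=0.999, eps=1e-8):
    m = v = 0.0
    for t in range(1, steps + 1):
        g = 2 * w
        m = b1 * m + (1 - b1) * g     # first-moment estimate
        v = b2 * v + (1 - b2) * g * g  # second-moment estimate
        m_hat = m / (1 - b1 ** t)      # bias correction
        v_hat = v / (1 - b2 ** t)
        w -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return w

w_sgd = sgd(5.0)
w_adam = adam(5.0)
# Both approach the minimum at w = 0 along different trajectories.
```

In practice the optimizers differ mainly in how they scale and smooth the gradient, which is what produces the different convergence behaviour the study measures.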
MK–TripNet: A Deep Learning Framework for Real-Time Multi-Class Lung Sound Classification Erini, Widya Surya; Thomas, Gracia Putri; Badia, Giulia Salzano; Rahadian, Arief; Raharjo, Sofyan Budi; Wulandari, Sari Ayu

DOI: 10.35882/jeeemi.v8i2.1403

Abstract

Respiratory diseases such as asthma, pneumonia, and Chronic Obstructive Pulmonary Disease (COPD) remain major global health challenges, particularly in resource-limited settings where access to pulmonary specialists and early diagnostic tools is limited. Automatic lung sound classification has emerged as a promising non-invasive screening approach; however, existing methods often rely on single-scale feature extraction, conventional loss functions, and offline analysis, which limit their discriminative capability and real-time applicability. The aim of this study is to develop and evaluate a deep learning framework for real-time multi-class lung sound classification that improves discriminative representation and temporal sensitivity. To address these limitations, this study proposes MK-TripNet, a novel deep learning architecture designed to integrate multi-scale feature extraction, discriminative embedding learning, and real-time inference within a unified framework. The main contribution of this work is the unified integration of a Multi-Kernel convolutional architecture, Triplet Loss-based embedding learning, and Sliding Window segmentation within a single end-to-end framework, enabling accurate segment-level lung sound classification in real-time scenarios. Unlike prior approaches, the proposed method simultaneously captures fine-grained temporal patterns and broader spectral characteristics while explicitly maximizing inter-class separability in the embedding space. The proposed model was evaluated using a newly constructed dataset comprising 1,409 lung sound segments obtained from primary digital stethoscope recordings and publicly available respiratory sound databases. Experimental results demonstrate that MK-TripNet consistently outperforms several strong baseline models, including CNN-BiGRU, CNN-BiGRU-UMAP, and VGGish-Triplet, achieving an accuracy of 89.1%, an F1-score of 0.89, and a recall of 0.88.
Ablation studies further confirm that the combined use of Multi-Kernel convolution, Triplet Loss, and Sliding Window segmentation yields the most robust and generalizable performances. These findings highlight the clinical potential of MK-TripNet for real-time digital auscultation and point-of-care respiratory screening, particularly in resource-limited and telemedicine settings.
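The triplet-loss objective at the heart of the embedding learning can be sketched with toy vectors; the margin value and embeddings below are invented, not the model's learned representations:

```python
import numpy as np

# Triplet loss: pull an anchor embedding toward a same-class positive and
# push it away from a different-class negative by at least a margin.
def triplet_loss(anchor, positive, negative, margin=1.0):
    d_pos = np.linalg.norm(anchor - positive)   # anchor-positive distance
    d_neg = np.linalg.norm(anchor - negative)   # anchor-negative distance
    return max(0.0, d_pos - d_neg + margin)

anchor = np.array([1.0, 0.0])     # toy lung-sound embedding
positive = np.array([1.1, 0.1])   # same class: close to the anchor
negative = np.array([-1.0, 0.0])  # other class: far from the anchor

loss_separated = triplet_loss(anchor, positive, negative)  # already apart
loss_violating = triplet_loss(anchor, negative, positive)  # margin violated
```

During training the loss is only nonzero for triplets that violate the margin, which is what drives inter-class separability in the embedding space.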
Multipoint Wrist Pulse Acquisition and Analysis by Combining HRV with Morphological Timing Features for Quantitative Identification of Ayurvedic Doshas Patel, Devendra; Patel, Mitul

DOI: 10.35882/jeeemi.v8i2.1474

Abstract

Nadi Pariksha, the traditional Ayurvedic method of wrist pulse examination, posits that three adjacent radial artery locations corresponding to Vata, Pitta, and Kapha (V-P-K) reflect distinct physiological states. While recent sensor-based systems have attempted to digitize wrist pulse acquisition, many have emphasized hardware design or classification performance without rigorously validating physiological differences between pulse sites within the same individual. This study presents a quantitative evaluation of the multi-point principle of Nadi Pariksha using synchronized multi-site photoplethysmography (PPG) combined with integrated cardiovascular signal analysis. Pulse waveforms were simultaneously acquired from 39 participants, including 32 healthy individuals and 7 clinically characterized subjects, at the three classical radial artery locations. Morphological timing features and time-domain heart rate variability (HRV) metrics were extracted to characterize vascular dynamics and autonomic regulation. Within-subject statistical analysis demonstrated significant spatial differentiation across the pulse sites. Crest time decreased from 0.204 s at the Kapha site to 0.175 s at the Vata site (14.2% reduction), while systolic width decreased from 0.140 s to 0.109 s (22.1% reduction) (p ≤ 0.004). Non-parametric analysis confirmed significant differences in crest time (H = 9.15, p = 0.010), pulse width (H = 8.43, p = 0.015), systolic amplitude, systolic area, and HRV variability (SDNN: H = 6.33, p = 0.041), with moderate-to-large effect sizes (η² = 0.12–0.20). Clinically characterized cases exhibited deviations from this baseline pattern, including a 62% reduction in crest time gradient and a 72% increase in stiffness index in diabetes, and a 55% reduction in gradient with a 25% decrease in HRV during acute infection. Given the limited clinical sample (n = 7), these findings are interpreted as preliminary. 
Overall, the results provide quantitative within-subject evidence supporting the physiological distinctiveness of the V-P-K pulse locations and contribute toward the development of standardized, sensor-based Nadi Pariksha.
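The SDNN metric reported above is simply the standard deviation of beat-to-beat (RR) intervals; a minimal sketch with invented interval values, not the study's recordings:

```python
import numpy as np

# Time-domain HRV sketch: SDNN is the standard deviation of RR intervals.
# The intervals below (in seconds) are illustrative only.
rr = np.array([0.80, 0.82, 0.78, 0.85, 0.79, 0.81])

mean_rr = rr.mean()
sdnn = rr.std(ddof=1)        # sample standard deviation, the usual SDNN
heart_rate = 60.0 / mean_rr  # beats per minute
```

A larger SDNN indicates more beat-to-beat variability; the paper combines this with morphological timing features such as crest time and systolic width at each pulse site.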
Design and Statistical Evaluation of an AI-Enabled IoT-Based Non-Invasive Biosensing System for Diabetes Risk Screening Kamble, Prachi C.; Ragha, Lakshamappa; Pingle, Yogesh

DOI: 10.35882/jeeemi.v8i2.1541

Abstract

Early identification of diabetes risk remains a significant challenge due to the invasive nature, recurring cost, and limited accessibility of conventional biochemical diagnostic tests. These limitations restrict continuous monitoring and hinder large-scale population screening, particularly in remote and resource-limited settings. The aim of this study is to design and statistically evaluate an AI-enabled IoT-based non-invasive biosensing system for diabetes risk screening, focusing on system-level engineering design, data integration, and performance validation rather than clinical diagnosis. In this study, the term “non-invasive” refers exclusively to externally measurable surface-level physiological and breath-based signals that do not require skin penetration, blood sampling, or subdermal sensor implantation. The main contributions of this work include the development of a wearable IoT-based non-invasive biosensing framework, integration of multi-modal physiological and breath-based biomarkers for risk assessment, implementation of an ensemble machine learning model for diabetes risk classification, and comprehensive statistical validation using agreement, reliability, and calibration metrics. The proposed DiaAssist system acquires physiological parameters such as heart rate, blood pressure, oxygen saturation, body temperature, physical activity indicators, and breath volatile organic compound acetone through a wearable IoT platform with edge-level preprocessing. Fused physiological and demographic features are processed using an ensemble learning framework to generate individualized diabetes risk scores. Performance evaluation was conducted on a single-center observational dataset comprising 625 records using paired statistical tests, agreement analysis, and calibration assessment. 
The optimized model achieved an accuracy of 99.7%, an area under the receiver operating characteristic curve of 1.000, a Cohen’s Kappa coefficient of 0.993, a Matthews correlation coefficient of 0.993, and a Brier score of 0.045, demonstrating strong classification reliability and probabilistic calibration. The results confirm that combining IoT-based non-invasive biosensing with ensemble machine learning enables accurate and reliable screening for diabetes risk. The proposed system provides a scalable, cost-effective, and engineering-oriented solution suitable for remote monitoring and preventive healthcare applications.
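Cohen's kappa and the Brier score used in the statistical validation can be computed directly; the labels and predicted probabilities below are toy values, not the study's data:

```python
import numpy as np

# Agreement and calibration metrics on invented binary predictions:
# Cohen's kappa corrects raw agreement for chance; the Brier score is
# the mean squared error of predicted probabilities (lower is better).
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 1])
p_pred = np.array([0.9, 0.1, 0.8, 0.4, 0.2, 0.1, 0.95, 0.6])

p_obs = np.mean(y_true == y_pred)              # observed agreement
p1, q1 = y_true.mean(), y_pred.mean()
p_chance = p1 * q1 + (1 - p1) * (1 - q1)       # expected chance agreement
kappa = (p_obs - p_chance) / (1 - p_chance)

brier = np.mean((p_pred - y_true) ** 2)
```

Kappa near 1 indicates agreement well beyond chance; a Brier score near 0 indicates well-calibrated probabilities, which is why the paper reports both alongside raw accuracy.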
A Hybrid Deep Ensemble Model for Precise Liver and Tumor Segmentation Using U-Net and W-Net Architectures B. Sravani; M. Sunil Kumar

DOI: 10.35882/jeeemi.v8i2.1089

Abstract

The identification of the liver and hepatic tumors on computed tomography (CT) scans is essential for early diagnosis, treatment planning, and surgery in hepatocellular carcinoma. However, automated segmentation is difficult due to the non-homogeneous appearance of tumors, blurry boundaries, the small size of annotated datasets, and high inter-slice variability. Existing single deep learning models are known to suffer from prediction variance and poor generalization in complex clinical conditions. The primary goal of the study is to develop an effective, highly accurate segmentation model that improves the accuracy, consistency, and explainability of liver and tumor borders in CT images. In this paper, an original hybrid deep ensemble model is proposed that leverages the advantages of U-Net and W-Net. This is the primary contribution: combining the strong spatial localization ability of U-Net with the reconstruction-driven unsupervised learning ability of W-Net minimizes variance and maximizes generalization. In addition, soft probability fusion, uncertainty modelling, and entropy-based confidence estimation are introduced to improve reliability and clinical interpretation. CT images are preprocessed by normalizing and resizing to 256×256. U-Net and W-Net are trained separately to produce pixel-wise probability maps, which are soft-averaged and thresholded. The ensemble is tested on benchmark liver CT datasets using the Dice coefficient, accuracy, precision, recall, F1-score, Intersection over Union (IoU), ROC-AUC, and statistical significance tests. The experimental results show that the proposed ensemble outperforms the individual U-Net and W-Net models by a wide margin, with an accuracy of 95.4%, a precision of 94.3%, a recall of 93.9%, an F1-score of 94.1%, an IoU of 89.8%, and an average ROC-AUC of 0.9615.
Statistical tests confirm that the improvements are significant (p < 0.01). In summary, the proposed deep ensemble segmentation can accurately, reliably, and effectively segment the liver and tumor, showing strong potential for clinical use and subsequent extension to multi-organ and multi-modal medical imaging.
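The soft probability fusion and entropy-based confidence steps can be sketched on tiny probability maps; the values below are invented, not outputs of the trained networks:

```python
import numpy as np

# Soft probability fusion: average the per-pixel foreground probabilities
# of the two models, threshold at 0.5 for the mask, and use binary entropy
# as a per-pixel confidence signal. 2x2 "maps" with invented values.
p_unet = np.array([[0.9, 0.2],
                   [0.6, 0.1]])   # hypothetical U-Net probabilities
p_wnet = np.array([[0.8, 0.4],
                   [0.7, 0.2]])   # hypothetical W-Net probabilities

p_fused = (p_unet + p_wnet) / 2.0
mask = (p_fused >= 0.5).astype(int)

eps = 1e-12  # avoid log(0)
entropy = -(p_fused * np.log2(p_fused + eps)
            + (1 - p_fused) * np.log2(1 - p_fused + eps))
# High entropy (p near 0.5) flags low-confidence pixels for review.
```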
Ensemble Voting Method to Enhance the Performance of a Dental Caries Detection System using Convolutional Neural Network Putri Rizkiah; Maulisa Oktiana; Khairun Saddami; Maya Fitria; Fitri Arnia; Hubbul Walidainy; Yunida Yunida

DOI: 10.35882/jeeemi.v8i2.1343

Abstract

Individual classification models for caries detection still face significant challenges, including limited accuracy and unstable predictions, which can hinder diagnosis, delay clinical decisions, and increase the risks associated with patient care. To overcome these limitations, this study proposes an ensemble voting method that combines five deep learning models: ResNet-152, MobileNetV2, InceptionV3, NASNetMobile, and EfficientNet-B5. This approach aims to enhance the accuracy and stability of caries detection by leveraging the complementary strengths of the individual models while mitigating their weaknesses. Each model was trained and tested on the same dataset of dental images, categorized into caries and regular classes. Their predictions were aggregated using hard and soft voting techniques. The ensemble's performance was evaluated using accuracy, precision, recall, and F1-score. Ensemble voting demonstrates a notable improvement in classification performance over the individual models: hard and soft voting deliver excellent classification performance and consistently outperform the best individual models. Accuracy increased from 0.8485 (EfficientNet-B5) to 0.8864 with hard voting and 0.8712 with soft voting, increases of 4.46% and 2.68%, respectively. Precision increased from 0.8182 (MobileNetV2) to 0.8493 and 0.8551, increases of 3.81% and 4.52%. For recall, EfficientNet-B5 ranked highest among individual models at 0.9242; hard voting increased it by 1.64% to 0.9394, while soft voting decreased it slightly by 3.28% to 0.8939. The F1-score of EfficientNet-B5 is 0.8592; hard and soft voting increased it by 3.83% and 1.73% to 0.8921 and 0.8741, respectively. The proposed ensemble improves the F1-score by 3.83 percentage points compared to the best individual model. The ensemble voting method effectively leverages the complementary strengths of each deep learning model to improve the stability and accuracy of fast, reliable early dental caries detection.
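Hard versus soft voting can be sketched with three toy models in place of the paper's five; the class probabilities are invented and deliberately chosen so the two schemes disagree:

```python
import numpy as np

# Hard vs. soft voting on one image. Class 0 = regular, class 1 = caries.
# Probabilities are invented for illustration, not model outputs.
probs = np.array([[0.40, 0.60],   # toy model A
                  [0.45, 0.55],   # toy model B
                  [0.70, 0.30]])  # toy model C

hard_votes = probs.argmax(axis=1)                       # per-model decisions
hard_pred = np.bincount(hard_votes, minlength=2).argmax()  # majority class

soft_pred = probs.mean(axis=0).argmax()  # average probabilities first
```

Here two weakly confident models outvote one confident model under hard voting, while soft voting lets the confident model's probability mass dominate, which is why the two schemes can produce different metrics on the same test set.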
HST-Net: Hierarchical Spectrum-Tokenization with Progressive Refinement for Cardiac MRI Segmentation Naga Chandrika Gogulamudi; Shamia D; V Kavithamani; Amitha Ida Chandran; K Venu; Kunchanapalli Rama Krishna

DOI: 10.35882/jeeemi.v8i2.1485

Abstract

The accurate segmentation of cardiac structures from Magnetic Resonance Imaging (MRI) plays a vital role in quantitative ventricular assessment, functional analysis, and the clinical diagnosis of cardiovascular diseases. Precise delineation of cardiac components, such as the left ventricle, right ventricle, and myocardial wall, is essential for evaluating cardiac morphology and function. In recent years, transformer-based architectures, including TransUNet and Swin-UNet, have demonstrated strong capabilities in modeling long-range dependencies and capturing global contextual information. However, despite these advantages, they often struggle to preserve smooth anatomical geometry and achieve high-precision boundary delineation, particularly in the presence of large shape deformations and significant inter-subject variability commonly observed in cardiac MRI data. To overcome these limitations, a Hierarchical Spectrum-Tokenization Network (HST-Net) is proposed. The core idea of HST-Net is to represent cardiac anatomy at multiple levels of granularity, enabling a more robust structural understanding across varying spatial scales. The proposed architecture incorporates a novel approach called Spectrum Tokenization, which divides the latent representations into two parts: one containing low-frequency global tokens that capture contextual information, and another containing high-frequency boundary-aware tokens that capture the contours. By progressively enhancing boundary details, the progressive refinement stage (PSR) significantly improves contour accuracy, especially for complex and thin structures. Experimental evaluations conducted on a cardiac MRI dataset demonstrate the effectiveness of the proposed approach. HST-Net achieves an average Dice coefficient of 91.6% and a pixel-wise segmentation accuracy of 94.8%.
Compared to nnU-Net and Swin-UNet, it shows consistent performance gains, yielding improvements of 2.1–3.4% in Dice score and 1.9–2.6% in segmentation accuracy across different cardiac structures.
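The Dice coefficient reported for HST-Net is a simple overlap ratio between the predicted and ground-truth masks; a sketch on toy binary masks (invented, not cardiac MRI data):

```python
import numpy as np

# Dice coefficient: 2 * |A ∩ B| / (|A| + |B|) on binary masks.
pred = np.array([[1, 1, 0],
                 [0, 1, 0]])    # toy predicted mask
truth = np.array([[1, 1, 0],
                  [1, 0, 0]])   # toy ground-truth mask

intersection = np.sum(pred * truth)   # pixels labeled 1 in both masks
dice = 2.0 * intersection / (pred.sum() + truth.sum())  # 4/6 ≈ 0.667
```

A Dice of 1.0 means perfect overlap; reporting it per structure (left ventricle, right ventricle, myocardium) is what makes the 2.1-3.4% gains over nnU-Net and Swin-UNet comparable across cardiac components.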

Page 1 of 2 | Total Records: 13