Contact Name
Triwiyanto
Contact Email
triwiyanto123@gmail.com
Phone
+628155126883
Journal Mail Official
editorial.jeeemi@gmail.com
Editorial Address
Department of Electromedical Engineering, Poltekkes Kemenkes Surabaya Jl. Pucang Jajar Timur No. 10, Surabaya, Indonesia
Location
Kota Surabaya,
Jawa Timur
INDONESIA
Journal of Electronics, Electromedical Engineering, and Medical Informatics
ISSN : -     EISSN : 2656-8632     DOI : https://doi.org/10.35882/jeeemi
The Journal of Electronics, Electromedical Engineering, and Medical Informatics (JEEEMI) is a peer-reviewed open-access journal. The journal invites scientists and engineers throughout the world to exchange and disseminate theoretical and practice-oriented topics covering three (3) major areas of research: 1) Electronics, 2) Biomedical Engineering, and 3) Medical Informatics (with an emphasis on hardware and software design). Submitted papers must be written in English for an initial review stage by editors and a further review process by a minimum of two reviewers.
Articles 287 Documents
Enhancing Deep Learning Model Using Whale Optimization Algorithm on Brain Tumor MRI Winarno, Winarno; Harjoko, Agus
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 1 (2026): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.35882/jeeemi.v8i1.941

Abstract

The increasing prevalence of brain cancer has emerged as a significant global health issue, with brain neoplasms, particularly gliomas, presenting considerable diagnostic and therapeutic obstacles. The timely and precise identification of such tumors is crucial for improving patient outcomes. This investigation explores the advancement of Convolutional Neural Networks (CNNs) for detecting brain tumors using MRI data, incorporating the Whale Optimization Algorithm (WOA) for the automated tuning of hyperparameters. Moreover, two callbacks, ReduceLROnPlateau and early stopping, were utilized to augment training efficacy and model resilience. The proposed model exhibited exceptional performance across all tumor categories. Specifically, the precision, recall, and F1-scores for Glioma were recorded as 0.997, 0.980, and 0.988, respectively; for Meningioma, as 0.983, 0.986, and 0.984; for No Tumor, as 0.998, 0.998, and 0.998; and for Pituitary, as 0.997, 0.997, and 0.997. The mean performance metrics attained were 0.994 for precision, 0.990 for recall, and 0.992 for F1-score. The overall accuracy of the model was determined to be 0.991. Notably, incorporating callbacks within the CNN architecture improved accuracy to 0.994. Furthermore, when synergized with the WOA, the CNN-WOA model achieved a maximum accuracy of 0.996. This advancement highlights the effectiveness of integrating adaptive learning methodologies with metaheuristic optimization techniques. The findings suggest that the model sustains high classification accuracy across diverse tumor types and exhibits stability and robustness throughout training. The amalgamation of callbacks and the Whale Optimization Algorithm significantly bolsters CNN performance in classifying brain tumors. These advancements contribute to the development of more reliable diagnostic instruments in medical imaging.
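The hyperparameter search that the abstract attributes to the Whale Optimization Algorithm follows a standard metaheuristic loop; the sketch below is a minimal generic WOA in NumPy, not the authors' implementation. In the paper's setting the objective would wrap CNN training and return a validation loss for a candidate hyperparameter vector; here it is any callable to be minimized, and the population and iteration counts are illustrative placeholders.

```python
import numpy as np

def whale_optimize(objective, dim, bounds, n_whales=20, n_iter=100, seed=0):
    """Minimal Whale Optimization Algorithm (WOA) sketch for continuous search."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_whales, dim))          # whale positions
    fit = np.array([objective(x) for x in X])
    best, best_f = X[fit.argmin()].copy(), fit.min()

    for t in range(n_iter):
        a = 2.0 - 2.0 * t / n_iter                         # decreases linearly 2 -> 0
        for i in range(n_whales):
            A = 2.0 * a * rng.random(dim) - a
            C = 2.0 * rng.random(dim)
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1.0):                # exploit: encircle the best whale
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                                      # explore: move around a random whale
                    rand = X[rng.integers(n_whales)]
                    X[i] = rand - A * np.abs(C * rand - X[i])
            else:                                          # spiral "bubble-net" update
                l = rng.uniform(-1.0, 1.0)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lo, hi)
            f = objective(X[i])
            if f < best_f:                                 # greedy best-so-far update
                best, best_f = X[i].copy(), f
    return best
```

On a toy objective such as the sphere function, the shrinking coefficient `a` drives the population from exploration toward exploitation of the best-known position, which is the behavior the hyperparameter search relies on.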
Impact of Different Kernels on Breast Cancer Severity Prediction Using Support Vector Machine Mahmudah, Kunti; Surono, Sugiyarto; Rusmining, Rusmining; Indriani, Fatma
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 1 (2026): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.35882/jeeemi.v8i1.960

Abstract

Breast cancer poses a critical global health challenge and continues to be one of the most prevalent causes of cancer-related deaths among women worldwide. Accurate and early classification of cancer severity is essential for improving treatment outcomes and guiding clinical decision-making, since timely intervention can significantly reduce mortality rates and enhance patient survival. This study evaluates the performance of Support Vector Machine (SVM) models using four different kernel functions, Linear, Polynomial, Radial Basis Function (RBF), and Sigmoid, for breast cancer severity prediction. The impact of feature selection was also examined, using the Random Forest algorithm to select the top features based on Mean Decrease Accuracy (MDA), which serves to reduce redundancy, improve interpretability, and enhance model efficiency. Experimental results show that the RBF kernel consistently outperformed other kernels, especially in terms of sensitivity, a critical metric in medical diagnostics that emphasizes the ability of the model to identify positive cases correctly. Without feature selection, the RBF kernel achieved an accuracy of 0.9744, a sensitivity of 0.9772, a precision of 0.9722, and an AUC of 0.9968, indicating strong performance across all evaluation metrics. After applying feature selection, the RBF kernel further improved the accuracy to 0.9754, the sensitivity to 0.9770, the precision to 0.9742, and the AUC to 0.9975, which demonstrated enhanced generalization and reduced overfitting, highlighting the benefits of targeted feature reduction. While the Polynomial kernel yielded the highest precision (up to 0.9799), its lower sensitivity (as low as 0.9237) indicates a greater risk of false negatives, which is particularly concerning in cancer detection. These findings underscore the importance of optimizing both kernel function and feature selection.
The RBF kernel, when combined with targeted feature selection, offers the most balanced and sensitive model, making it highly suitable for breast cancer classification tasks where diagnostic accuracy is vital.
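The four kernels compared in this study have standard closed forms; the NumPy sketch below uses scikit-learn-style parameter names (`gamma`, `degree`, `coef0`), which are assumptions, since the abstract does not state the exact settings used.

```python
import numpy as np

def linear_kernel(x, y):
    """k(x, y) = <x, y>"""
    return x @ y

def polynomial_kernel(x, y, degree=3, gamma=1.0, coef0=1.0):
    """k(x, y) = (gamma * <x, y> + coef0) ** degree"""
    return (gamma * (x @ y) + coef0) ** degree

def rbf_kernel(x, y, gamma=0.5):
    """k(x, y) = exp(-gamma * ||x - y||^2); always 1 when x == y."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def sigmoid_kernel(x, y, gamma=0.5, coef0=0.0):
    """k(x, y) = tanh(gamma * <x, y> + coef0)"""
    return np.tanh(gamma * (x @ y) + coef0)
```

The RBF kernel's value decays with squared distance between samples, which is what makes it sensitive to local structure and helps explain the sensitivity advantage the study reports.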
CVAE-ADS: A Deep Learning Framework for Traffic Accident Detection and Video Summarization Chauhan, Ankita; Vegad, Sudhir
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 1 (2026): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.35882/jeeemi.v8i1.1139

Abstract

With the rapid increase in road traffic and surveillance video, manual monitoring to identify accidents is becoming more and more difficult and prone to human error. This underscores the urgent need for robust, automated systems capable of identifying accidents, as well as of summarizing long videos. To address this issue, we propose CVAE-ADS, an unsupervised approach that not only detects anomalies but also summarizes the keyframes of a video to monitor traffic. This method operates in two phases. The first stage detects abnormalities in traffic video using a Convolutional Variational Autoencoder, which is trained on normal frames and identifies anomalies based on reconstruction errors. The second stage clusters the detected anomalous frames in the latent space and then selects representative keyframes to form a summary video. We tested the method with two benchmark datasets, namely, the IITH Accident Dataset and a subset of UCF-Crime. The findings show that the proposed approach achieved high accident-detection accuracy, with AUCs of 90.61 and 87.95 on IITH and UCF-Crime respectively, together with low reconstruction errors and Equal Error Rates. For summarization, the method achieves substantial frame reduction and produces summaries of high visual quality with a wide variety of keyframes, reaching up to an 85% reduction rate with 92.5% coverage on the IITH dataset and an 80% reduction rate with 90% coverage on an accident subset of the UCF-Crime dataset. CVAE-ADS offers a lightweight option for continuous traffic monitoring that uses limited computational resources to classify accidents in real time and summarize video footage of the accidents.
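The first-stage criterion, flagging frames whose autoencoder reconstruction error is unusually high relative to normal traffic, can be illustrated with a simple statistical threshold. The mean-plus-`k`-standard-deviations rule and the `k` parameter below are assumptions for illustration; the abstract does not state the authors' exact decision rule.

```python
import numpy as np

def flag_anomalies(errors, normal_errors, k=3.0):
    """Flag frames whose reconstruction error exceeds a threshold derived
    from errors measured on normal (accident-free) frames.

    errors        : per-frame reconstruction error (e.g. per-frame MSE between
                    a frame and its CVAE reconstruction)
    normal_errors : reconstruction errors on held-out normal frames
    k             : hypothetical sensitivity parameter (std-dev multiplier)
    """
    errors = np.asarray(errors, dtype=float)
    thresh = np.mean(normal_errors) + k * np.std(normal_errors)
    return errors > thresh, thresh
```

Frames flagged this way would then feed the second stage, where latent-space clustering picks representative keyframes for the summary.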
Improving the Segmentation of Colorectal Cancer from Histopathological Images Using a Hybrid Deep Learning Pipeline: A Case Study Idiri, Fahima; MEZIANE, Farid; BOUCHAL, Hakim
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 1 (2026): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.35882/jeeemi.v8i1.1158

Abstract

Early and precise diagnosis of colorectal cancer plays a crucial role in enhancing patient outcomes. Although histopathological assessment remains the reference standard for diagnosis, it is often lengthy and subject to variability between pathologists. This study aims to develop and evaluate a hybrid deep learning-based approach for the automated segmentation of Hematoxylin and Eosin-stained colorectal histopathology images. The work investigates how preprocessing strategies and architectural design choices influence the model’s ability to identify meaningful tissue patterns while preserving computational efficiency. Furthermore, it demonstrates the integration of a deep learning-based segmentation module into colorectal cancer diagnostic workflows. Several deep learning–based segmentation models with varying architectural configurations were trained and evaluated using a publicly available endoscopic biopsy histopathological hematoxylin and eosin image dataset. Preprocessing procedures were applied to generate computationally efficient image representations, thereby improving training stability and overall segmentation performance. The best-performing configuration achieved a segmentation accuracy of 0.97, reflecting consistent and reliable performance across samples. It accurately delineated cancerous tissue boundaries and effectively distinguished benign from malignant regions, demonstrating sensitivity to fine morphological details relevant to diagnosis. Strong agreement between predicted and expert-annotated regions confirmed the model’s reliability and alignment with expert assessments. Minimal overfitting was observed, indicating stable training behavior and robust generalization across different colorectal tissue types. In comparative evaluations, the model maintained high accuracy across all cancer categories and outperformed existing state-of-the-art approaches.
Overall, these findings demonstrate the model’s robustness, efficiency, and adaptability, confirming that careful architectural and preprocessing optimization can substantially enhance segmentation quality and diagnostic reliability. The proposed approach can support pathologists by providing accurate tissue segmentation, streamlining diagnostic procedures, and improving clinical decision-making. This study underscores the value of optimized deep learning models as intelligent decision-support tools for efficient and consistent colorectal cancer diagnosis.
EPR-Stego: Quality-Preserving Steganographic Framework for Securing Electronic Patient Records Safitri, Wardatul Amalia; Arsyad, Hammuda; Croix, Ntivuguruzwa Jean De La; Ahmad, Tohari; Batamuliza, Jennifer; Basori, Ahmad Hoirul
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 1 (2026): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.35882/jeeemi.v8i1.1172

Abstract

Secure medical data transmission is a fundamental requirement in telemedicine, where information is often exchanged over public networks. Protecting patient confidentiality and ensuring data integrity are crucial, particularly when sensitive medical records are involved. Steganography, an information hiding technique, offers a promising solution by embedding confidential data within medical images. This approach not only safeguards privacy but also supports authentication processes, ensuring that patient information remains secure during transmission. This study introduces EPR-Stego, a novel steganographic framework designed specifically for embedding electronic patient record (EPR) data in medical images. The key innovation of EPR-Stego lies in its mathematical strategy to minimize pixel intensity differences between neighboring pixels. By reducing usable pixel variations, the framework generates a stego image that is visually indistinguishable from the original, thereby enhancing imperceptibility while preserving diagnostic quality. Additionally, the method produces a key table, required by the recipient to accurately extract the embedded data, which further strengthens security against unauthorized access. The design of EPR-Stego aims to prevent attackers from easily detecting the presence of hidden medical information, mitigating the risk of targeted breaches. Experimental evaluations demonstrate its effectiveness, with the proposed approach achieving Peak Signal to Noise Ratio (PSNR) values between 51.71 dB and 75.59 dB, and Structural Similarity Index Measure (SSIM) scores reaching up to 0.99. These metrics confirm that the stego images maintain high visual fidelity and diagnostic reliability. Overall, EPR-Stego outperforms several existing techniques, offering a robust and secure solution for medical data transmission. 
By combining imperceptibility, security, and quality preservation, the framework addresses the pressing need for reliable protection of patient information in telemedicine environments.
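The PSNR figures quoted above (51.71 dB to 75.59 dB) follow the standard definition over the mean squared error between the cover and stego images; a minimal NumPy version of that metric, assuming 8-bit images with a peak value of 255:

```python
import numpy as np

def psnr(original, stego, max_val=255.0):
    """Peak Signal-to-Noise Ratio (dB) between a cover image and its stego
    counterpart: 10 * log10(max_val^2 / MSE). Infinite for identical images."""
    mse = np.mean((np.asarray(original, dtype=float) - np.asarray(stego, dtype=float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher PSNR means the embedding perturbs fewer pixel intensities, which is the imperceptibility property EPR-Stego's embedding strategy is designed to maximize.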
Deep Learning-Based Lung Sound Classification Using Mel-Spectrogram Features for Early Detection of Respiratory Diseases Yabani, Midfai; Faisal, Mohammad Reza; Indriani, Fatma; Nugrahadi, Dodon Turianto; Kartini, Dwi; Satou, Kenji
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 1 (2026): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.35882/jeeemi.v8i1.1256

Abstract

Respiratory diseases such as asthma, chronic obstructive pulmonary disease, and pneumonia remain among the leading causes of death globally. Traditional diagnostic approaches, including auscultation, rely heavily on the subjective expertise of medical practitioners and the quality of the instruments used. Recent advancements in artificial intelligence offer promising alternatives for automated lung sound analysis. However, audio is an unstructured data format that must be converted into a suitable format for AI algorithms. Another significant challenge lies in the imbalanced class distribution within available datasets, which can adversely affect classification performance and model reliability. This study applied several comprehensive preprocessing techniques, including random undersampling to address data imbalance, resampling audio at 4000 Hz for standardization, and standardizing audio duration to 2.7 seconds for consistency. Feature extraction was then performed using the Mel Spectrogram method, converting audio signals into image representations to serve as input for classification algorithms based on deep learning architectures. To determine optimal performance characteristics, various Convolutional Neural Network (CNN) architectures were systematically evaluated, including LeNet-5, AlexNet, VGG-16, VGG-19, ResNet-50, and ResNet-152. VGG-16 achieved the highest classification accuracy of the tested models at 75.5%, demonstrating superior performance in respiratory sound classification tasks. This study demonstrates the potential of AI-based lung sound classification systems as a complementary diagnostic tool for healthcare professionals and the general public in supporting early identification of respiratory abnormalities and diseases. The findings suggest that automated lung sound analysis could enhance diagnostic accessibility and provide more valuable support for clinical decision-making in respiratory healthcare applications.
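The duration-standardization step described above (fixing every clip to 2.7 s at 4000 Hz) reduces to padding or truncating each signal to a fixed sample count; a minimal sketch, assuming the audio has already been resampled to 4000 Hz (the resampling itself would use a DSP library):

```python
import numpy as np

TARGET_SR = 4000      # resampling rate used in the study (Hz)
TARGET_SEC = 2.7      # standardized clip duration (s)

def standardize_duration(audio, sr=TARGET_SR, seconds=TARGET_SEC):
    """Zero-pad or truncate a 1-D signal to exactly sr * seconds samples,
    mirroring the study's 2.7-second standardization step."""
    n = int(round(sr * seconds))
    audio = np.asarray(audio, dtype=float)
    if len(audio) >= n:
        return audio[:n]          # truncate long recordings
    return np.pad(audio, (0, n - len(audio)))  # zero-pad short ones
```

Every clip then yields a mel spectrogram of identical shape, which is what lets the CNN architectures consume the recordings as fixed-size images.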
CIT-LieDetect: A Robust Deep Learning Framework for EEG-Based Deception Detection Using Concealed Information Test Nagale, Tanmayi; Khandare, Anand
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 1 (2026): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.35882/jeeemi.v8i1.1300

Abstract

Deception detection with electroencephalography (EEG) is still an open problem as a result of inter-individual variability of brain activity and neural dynamics of deceitful responses. Traditional methods fail to perform well in terms of consistent generalization, and as a result, research has shifted towards exploring sophisticated deep learning methods for Concealed Information Tests (CIT). The objective of the present study is to categorize subjects as guilty or innocent based on EEG measurements and rigorously test model performance in terms of accuracy, sensitivity, and specificity. To achieve this, experiments were conducted on two EEG datasets: the LieWaves dataset, consisting of 27 subjects recorded with five channels (AF3, T7, Pz, T8, AF4), and the CIT dataset, comprising 79 subjects recorded with 16 channels (Fp1, Fp2, F3, F4, C3, C4, Cz, P3, P4, Pz, O1, O2, T3/T7, T4/T8, T5/P7, T6/P8). Preprocessing involved a band-pass filter for noise reduction, followed by feature extraction using the Discrete Wavelet Transform (DWT) and the Fast Fourier Transform (FFT). Three models were evaluated: FBC-EEGNet, InceptionTime-light, and their ensemble. Results indicate that InceptionTime-light achieved the highest accuracy of 79.2% on the CIT dataset, surpassing FBC-EEGNet (70.8%). On the LieWaves dataset, FBC-EEGNet achieved superior performance, with 71.6% accuracy, compared with InceptionTime-light (65.93%). In terms of specificity, FBC-EEGNet reached 93.7% on the CIT dataset, while InceptionTime-light demonstrated balanced performance with 62.5% sensitivity and 87.5% specificity. Notably, the ensemble model provided stable and generalizable outcomes, yielding 70.8% accuracy, 62.5% sensitivity, and 75% specificity on the CIT dataset, confirming its robustness across subject groups. In conclusion, FBC-EEGNet is effective for maximizing specificity, InceptionTime-light achieves higher accuracy, and the ensemble model delivers a balanced trade-off.
The implications of this work are to advance reliable EEG-based deception detection and to set the stage for future research on explainable and interpretable models, validated on larger and more diverse datasets.
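One simple form of the FFT-based features mentioned in the preprocessing pipeline above is per-band spectral power of a channel. The sketch below is illustrative only; the study combines DWT and FFT features, and its exact feature set is not specified in the abstract. Band edges here are the conventional EEG band names, used as assumptions.

```python
import numpy as np

def band_power(signal, fs, band):
    """Mean power of a 1-D signal within a frequency band (lo, hi) in Hz,
    computed from the one-sided FFT power spectrum."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2   # power per frequency bin
    lo, hi = band
    mask = (freqs >= lo) & (freqs < hi)
    return float(spectrum[mask].mean())
```

A 10 Hz oscillation, for instance, should concentrate its power in the alpha band (8-13 Hz) and contribute almost nothing to the beta band (13-30 Hz), which is the kind of per-band contrast such features expose to the classifiers.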
Optimized Multi-Resolution Attention-Based Architecture for Effective Diabetic Skin Lesion Classification Jaleesha, B. K.; Suganthi, Suganthi; Priyadharsini, N. K.; S., Yuvaraj; Pyingkodi, M.; Vallikkannu, M.
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 1 (2026): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.35882/jeeemi.v8i1.1320

Abstract

Early and reliable identification of diabetic skin complications, including ischemia and infection, is essential for timely clinical intervention and prevention of severe outcomes. Nevertheless, traditional deep learning models often exhibit limited generalization capability and high computational demands, particularly when distinguishing between visually subtle infection types. To overcome these challenges, this study introduces an end-to-end deep learning architecture termed the Enhanced Multi-Resolution Multi-Path Attention Network (EMRMP-Net), specifically designed for robust diabetic lesion classification. A key contribution of this work is the introduction of a trainable attention-based fusion mechanism that adaptively learns to weight and integrate multi-resolution feature maps, enhancing contextual understanding and discriminative performance. To address the prevalent issue of class imbalance in medical imaging datasets, EMRMP-Net utilizes focal loss and domain-tailored data augmentation, thereby promoting stable learning and improved representation of minority classes. Additionally, a shared classification head across multiple resolution pathways enables joint feature optimization, reducing computational redundancy and improving learning efficiency compared to traditional MRMP models. Comprehensive experiments on the publicly available Diabetic Foot Ulcer (DFU) dataset demonstrate that EMRMP-Net surpasses existing state-of-the-art methods, achieving 98.12% accuracy and 98.14% F1-score for ischemia detection, and 95.27% accuracy with 93.68% F1-score for infection classification. Overall, EMRMP-Net provides a highly effective, computationally efficient, and generalizable framework for automated diabetic skin lesion analysis, demonstrating strong potential for real-world clinical applications.
EMRMP-Net is designed as a general framework for diabetic skin lesion analysis, capable of handling diverse lesion characteristics through multi-resolution and attention-based feature learning. However, in this work, the model is explicitly formulated, trained, and evaluated for the clinically critical binary classification task of distinguishing ischemic ulcers from infected ulcers within DFU imagery.
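The focal loss used above to counter class imbalance down-weights well-classified examples relative to standard cross-entropy. A minimal binary-case sketch in NumPy follows; `gamma=2.0` and `alpha=0.25` are the values common in the focal-loss literature, not settings stated in the abstract.

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: mean of -alpha_t * (1 - p_t)^gamma * log(p_t),
    where p_t is the predicted probability of the true class.
    With gamma = 0 it reduces to an alpha-weighted cross-entropy."""
    p = np.clip(np.asarray(p, dtype=float), 1e-7, 1.0 - 1e-7)
    y = np.asarray(y, dtype=float)
    p_t = np.where(y == 1, p, 1.0 - p)             # prob. assigned to true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)  # per-class weighting
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)))
```

The `(1 - p_t)^gamma` factor is what shifts training emphasis toward hard, typically minority-class examples, which is the behavior the abstract credits for stable learning under imbalance.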
Optimized Metaheuristic Integrated Neuro-Fuzzy Deep Learning Framework for EEG-Based Lie Detection Nagale, Tanmayi; Khandare, Anand
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 1 (2026): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.35882/jeeemi.v8i1.1340

Abstract

EEG-based deception detection remains challenging due to three critical limitations: high inter-subject variability, which restricts generalization, the black-box nature of deep learning models that undermines forensic interpretability, and substantial computational overhead arising from high-dimensional multi-channel EEG data. Although recent state-of-the-art approaches report accuracies of 82–88%, they fail to provide the transparency required for legal and forensic admissibility. To address these limitations, this study aims to develop an accurate, computationally efficient, and explainable EEG-based deception detection framework suitable for real-world forensic applications. The primary contribution of this work is a novel hybrid neuro-fuzzy architecture that jointly integrates intelligent channel selection, complementary deep feature learning, and transparent fuzzy reasoning, enabling high performance without sacrificing interpretability. The proposed framework follows a five-stage pipeline: (1) intelligent channel selection using Type-2 fuzzy inference with ANFIS-based ranking and multi-objective evolutionary optimization (MOEA/D), reducing EEG dimensionality from 64 to 14 channels (78.1% reduction); (2) dual-path deep learning that combines EEGNet for spatial–temporal feature extraction with InceptionTime-Light for multi-scale temporal representations; (3) a fuzzy attention mechanism to generate interpretable feature importance weights; (4) an ANFIS-based classifier employing Takagi–Sugeno fuzzy rules for transparent decision-making; and (5) triple-level interpretability through channel importance visualization, attention-weighted features, and extractable linguistic rules. The framework is evaluated on two benchmark datasets: LieWaves (27 subjects, 5-channel EEG) and the Concealed Information Test (CIT) dataset (79 subjects, 16-channel EEG).
Experimental results demonstrate superior performance, achieving 93.8% accuracy on LieWaves and 92.7% on the CIT dataset, representing an improvement of 5.3 percentage points over the previous best-performing methods, while maintaining balanced sensitivity (92.4%) and specificity (95.2%). In conclusion, this work establishes that neuro-fuzzy integration can simultaneously achieve high classification accuracy, computational efficiency, and forensic-grade explainability, thereby advancing the practical deployment of EEG-based deception detection systems in real-world forensic applications.
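The Takagi–Sugeno rules in stage (4) produce a crisp output as a firing-strength-weighted average of rule consequents. The zero-order sketch below illustrates that inference step generically; the Gaussian memberships, rule count, and parameters are illustrative placeholders, not the paper's trained ANFIS.

```python
import numpy as np

def ts_infer(x, centers, sigmas, consequents):
    """Zero-order Takagi-Sugeno inference: each rule i fires with a Gaussian
    membership of input x around centers[i] (per-feature widths sigmas[i]),
    and the crisp output is the firing-strength-weighted average of the
    constant rule consequents."""
    x = np.asarray(x, dtype=float)
    # firing strength per rule: product of per-feature Gaussian memberships,
    # computed here as exp of the summed negative squared normalized distances
    w = np.exp(-np.sum(((x - centers) / sigmas) ** 2, axis=1))
    return float(np.sum(w * consequents) / np.sum(w))
```

Because the output is an explicit weighted average over named rules, each prediction can be traced back to the rules that fired most strongly, which is the transparency property the framework leans on for forensic interpretability.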
HALF-MAFUNET: A Lightweight Architecture Based on Multi-Scale Adaptive Fusion for Medical Image Segmentation Maula Sandy, Abiaz Fazel; Prasetyo, Heri
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 1 (2026): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.35882/jeeemi.v8i1.1357

Abstract

Medical image segmentation is a critical component in computer-aided diagnosis systems, but many deep learning models still require large numbers of parameters and heavy computation. Classical CNN-based architectures such as U-Net and its variants achieve good accuracy, but are often too heavy for real deployment. Meanwhile, modern Transformer-based or Mamba-based models capture long-range information but typically increase model complexity. Because of these limitations, there is still a need for a lightweight segmentation model that can provide a good balance between accuracy and efficiency across different types of medical images. This paper proposes Half-MAFUNet, a lightweight architecture based on multi-scale adaptive fusion and designed as a simplified version of MAFUNet. The main contribution of this work is combining the efficient encoder structure of Half-UNet with advanced fusion and attention mechanisms. Half-MAFUNet integrates Hierarchy Aware Mamba (HAM) for global feature modelling, Multi-Scale Adaptive Fusion (MAF) to combine global and local information, and two attention modules, Adaptive Channel Attention (ACA) and Adaptive Spatial Attention (ASA), to refine skip connections. In addition, this model incorporates Channel Atrous Spatial Pyramid Pooling (CASPP) to capture multi-scale receptive fields efficiently without increasing computational cost. Together, these components create a compact architecture that maintains strong representational power. The model is trained and evaluated on three public datasets: CVC-ClinicDB for colorectal polyp segmentation, BUSI for breast tumor segmentation, and ISIC-2018 for skin lesion segmentation. All images are resized to 256×256 pixels and processed using geometric and intensity-based augmentations.
Half-MAFUNet achieves competitive performance, obtaining mean IoU of around 84–85% and Dice/F1-Score of around 90–92% across datasets, while using significantly fewer parameters and GFLOPs compared to U-Net, Att-UNet, UNeXt, MALUNet, LightM-UNet, VM-UNet, and UD-Mamba. These results show that Half-MAFUNet provides accurate and efficient medical image segmentation, making it suitable for real-world deployment on devices with limited computational resources.
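The IoU and Dice/F1 figures above are the standard overlap metrics for binary segmentation masks, related by Dice = 2·IoU / (1 + IoU), which is why an IoU around 84-85% corresponds to a Dice around 90-92%. A minimal NumPy version:

```python
import numpy as np

def iou_dice(pred, target):
    """Intersection-over-Union and Dice/F1 for two binary masks.
    The two are linked deterministically: Dice = 2 * IoU / (1 + IoU)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = inter / union
    dice = 2 * inter / (pred.sum() + target.sum())
    return float(iou), float(dice)
```

In practice these are computed per image and averaged over the test set, which is the "mean IoU" reported for CVC-ClinicDB, BUSI, and ISIC-2018.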