Contact Name
Triwiyanto
Contact Email
triwiyanto123@gmail.com
Phone
+628155126883
Journal Mail Official
editorial.jeeemi@gmail.com
Editorial Address
Department of Electromedical Engineering, Poltekkes Kemenkes Surabaya Jl. Pucang Jajar Timur No. 10, Surabaya, Indonesia
Location
Kota Surabaya,
Jawa Timur
INDONESIA
Journal of Electronics, Electromedical Engineering, and Medical Informatics
ISSN: -     EISSN: 2656-8632     DOI: https://doi.org/10.35882/jeeemi
The Journal of Electronics, Electromedical Engineering, and Medical Informatics (JEEEMI) is a peer-reviewed open-access journal. The journal invites scientists and engineers throughout the world to exchange and disseminate theoretical and practice-oriented work covering three (3) major areas of research: 1) Electronics, 2) Biomedical Engineering, and 3) Medical Informatics (with emphasis on hardware and software design). Submitted papers must be written in English for an initial review stage by the editors and a further review process by a minimum of two reviewers.
Articles: 25 Documents
Search results for issue "Vol 8 No 1 (2026): January": 25 Documents
Improving the Segmentation of Colorectal Cancer from Histopathological Images Using a Hybrid Deep Learning Pipeline: A Case Study Idiri, Fahima; MEZIANE, Farid; BOUCHAL, Hakim
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 1 (2026): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v8i1.1158

Abstract

Early and precise diagnosis of colorectal cancer plays a crucial role in enhancing patients' outcomes. Although histopathological assessment remains the reference standard for diagnosis, it is often lengthy and subject to variability between pathologists. This study aims to develop and evaluate a hybrid deep learning-based approach for the automated segmentation of Hematoxylin and Eosin-stained colorectal histopathology images. The work investigates how preprocessing strategies and architectural design choices influence the model’s ability to identify meaningful tissue patterns while preserving computational efficiency. Furthermore, it demonstrates the integration of a deep learning-based segmentation module into colorectal cancer diagnostic workflows. Several deep learning–based segmentation models with varying architectural configurations were trained and evaluated using a publicly available endoscopic biopsy histopathological hematoxylin and eosin image dataset. Preprocessing procedures were applied to generate computationally efficient image representations, thereby improving training stability and overall segmentation performance. The best-performing configuration achieved a segmentation accuracy of 0.97, reflecting consistent and reliable performance across samples. It accurately delineated cancerous tissue boundaries and effectively distinguished benign from malignant regions, demonstrating sensitivity to fine morphological details relevant to diagnosis. Strong agreement between predicted and expert-annotated regions confirmed the model’s reliability and alignment with expert assessments. Minimal overfitting was observed, indicating stable training behavior and robust generalization across different colorectal tissue types. In comparative evaluations, the model maintained high accuracy across all cancer categories and outperformed existing state-of-the-art approaches. 
Overall, these findings demonstrate the model’s robustness, efficiency, and adaptability, confirming that careful architectural and preprocessing optimization can substantially enhance segmentation quality and diagnostic reliability. The proposed approach can support pathologists by providing accurate tissue segmentation, streamlining diagnostic procedures, and improving clinical decision-making. This study underscores the value of optimized deep learning models as intelligent decision-support tools for efficient and consistent colorectal cancer diagnosis.
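The reported agreement between predicted and expert-annotated regions can be made concrete with the standard mask-overlap metrics. A minimal pure-Python sketch of the Dice coefficient and pixel accuracy on binary masks (the masks and function names are illustrative, not the authors' pipeline):

```python
# Dice and pixel accuracy for binary segmentation masks, given as flat 0/1 lists.

def dice_coefficient(pred, target):
    """Dice = 2*|P∩T| / (|P| + |T|); returns 1.0 for two empty masks."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 1.0 if total == 0 else 2.0 * intersection / total

def pixel_accuracy(pred, target):
    """Fraction of pixels where prediction and ground truth agree."""
    correct = sum(p == t for p, t in zip(pred, target))
    return correct / len(target)

pred   = [1, 1, 0, 0, 1, 0, 1, 1]   # toy predicted mask
target = [1, 0, 0, 0, 1, 0, 1, 1]   # toy expert annotation
print(round(dice_coefficient(pred, target), 3))  # 2*4/(5+4) -> 0.889
print(pixel_accuracy(pred, target))              # 7/8 -> 0.875
```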
EPR-Stego: Quality-Preserving Steganographic Framework for Securing Electronic Patient Records Safitri, Wardatul Amalia; Arsyad, Hammuda; Croix, Ntivuguruzwa Jean De La; Ahmad, Tohari; Batamuliza, Jennifer; Basori, Ahmad Hoirul

DOI: 10.35882/jeeemi.v8i1.1172

Abstract

Secure medical data transmission is a fundamental requirement in telemedicine, where information is often exchanged over public networks. Protecting patient confidentiality and ensuring data integrity are crucial, particularly when sensitive medical records are involved. Steganography, an information hiding technique, offers a promising solution by embedding confidential data within medical images. This approach not only safeguards privacy but also supports authentication processes, ensuring that patient information remains secure during transmission. This study introduces EPR-Stego, a novel steganographic framework designed specifically for embedding electronic patient record (EPR) data in medical images. The key innovation of EPR-Stego lies in its mathematical strategy to minimize pixel intensity differences between neighboring pixels. By reducing usable pixel variations, the framework generates a stego image that is visually indistinguishable from the original, thereby enhancing imperceptibility while preserving diagnostic quality. Additionally, the method produces a key table, required by the recipient to accurately extract the embedded data, which further strengthens security against unauthorized access. The design of EPR-Stego aims to prevent attackers from easily detecting the presence of hidden medical information, mitigating the risk of targeted breaches. Experimental evaluations demonstrate its effectiveness, with the proposed approach achieving Peak Signal to Noise Ratio (PSNR) values between 51.71 dB and 75.59 dB, and Structural Similarity Index Measure (SSIM) scores reaching up to 0.99. These metrics confirm that the stego images maintain high visual fidelity and diagnostic reliability. Overall, EPR-Stego outperforms several existing techniques, offering a robust and secure solution for medical data transmission. 
By combining imperceptibility, security, and quality preservation, the framework addresses the pressing need for reliable protection of patient information in telemedicine environments.
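The imperceptibility figures above come from standard fidelity metrics. A minimal pure-Python sketch of PSNR on 8-bit pixel lists (the cover/stego values are invented, and this is not the EPR-Stego implementation):

```python
# PSNR between a cover image and its stego version, on flat 8-bit pixel lists.
import math

def psnr(cover, stego, max_val=255.0):
    """PSNR = 10*log10(MAX^2 / MSE); identical images give infinity."""
    mse = sum((c - s) ** 2 for c, s in zip(cover, stego)) / len(cover)
    return float("inf") if mse == 0 else 10.0 * math.log10(max_val ** 2 / mse)

cover = [120, 121, 119, 200, 201, 202, 50, 51]
stego = [120, 122, 119, 200, 200, 202, 50, 52]  # small embedding changes
print(round(psnr(cover, stego), 2))  # high dB -> visually indistinguishable
```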
Deep Learning-Based Lung Sound Classification Using Mel-Spectrogram Features for Early Detection of Respiratory Diseases Yabani, Midfai; Faisal, Mohammad Reza; Indriani, Fatma; Nugrahadi, Dodon Turianto; Kartini, Dwi; Satou, Kenji

DOI: 10.35882/jeeemi.v8i1.1256

Abstract

Respiratory diseases such as asthma, chronic obstructive pulmonary disease, and pneumonia remain among the leading causes of death globally. Traditional diagnostic approaches, including auscultation, rely heavily on the subjective expertise of medical practitioners and the quality of the instruments used. Recent advancements in artificial intelligence offer promising alternatives for automated lung sound analysis. However, audio is an unstructured data format that must be converted into a suitable format for AI algorithms. Another significant challenge lies in the imbalanced class distribution within available datasets, which can adversely affect classification performance and model reliability. This study applied several comprehensive preprocessing techniques, including random undersampling to address data imbalance, resampling audio at 4000 Hz for standardization, and standardizing audio duration to 2.7 seconds for consistency. Feature extraction was then performed using the Mel Spectrogram method, converting audio signals into image representations to serve as input for classification algorithms based on deep learning architectures. To determine optimal performance characteristics, various Convolutional Neural Network (CNN) architectures were systematically evaluated, including LeNet-5, AlexNet, VGG-16, VGG-19, ResNet-50, and ResNet-152. VGG-16 achieved the highest classification accuracy of the tested models at 75.5%, demonstrating superior performance in respiratory sound classification tasks. This study demonstrates the potential of AI-based lung sound classification systems as a complementary diagnostic tool for healthcare professionals and the general public in supporting early identification of respiratory abnormalities and diseases. The findings suggest that automated lung sound analysis could enhance diagnostic accessibility and provide more valuable support for clinical decision-making in respiratory healthcare applications.
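The Mel-spectrogram front end rests on a Hz-to-mel frequency mapping. A sketch using the common HTK formula — an assumption, since the abstract does not state the exact variant (librosa, for instance, defaults to a Slaney-style scale):

```python
# Hz <-> mel conversion (HTK convention) underlying a Mel spectrogram's
# filter-bank placement.
import math

def hz_to_mel(f):
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# With audio resampled to 4000 Hz, as in the study, the analyzable band
# tops out at the 2000 Hz Nyquist frequency:
print(round(hz_to_mel(2000.0), 1))
print(round(mel_to_hz(hz_to_mel(2000.0)), 6))  # round-trips to 2000.0
```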
CIT-LieDetect: A Robust Deep Learning Framework for EEG-Based Deception Detection Using Concealed Information Test Nagale, Tanmayi; Khandare, Anand

DOI: 10.35882/jeeemi.v8i1.1300

Abstract

Deception detection with electroencephalography (EEG) remains an open problem due to the inter-individual variability of brain activity and the neural dynamics of deceitful responses. Traditional methods fail to generalize consistently, and research has therefore shifted towards sophisticated deep learning methods for Concealed Information Tests (CIT). The objective of the present study is to categorize subjects as guilty or innocent based on EEG measurements and to rigorously test model performance in terms of accuracy, sensitivity, and specificity. To achieve this, experiments were conducted on two EEG datasets: the LieWaves dataset, consisting of 27 subjects recorded with five channels (AF3, T7, Pz, T8, AF4), and the CIT dataset, comprising 79 subjects recorded with 16 channels (Fp1, Fp2, F3, F4, C3, C4, Cz, P3, P4, Pz, O1, O2, T3/T7, T4/T8, T5/P7, T6/P8). Preprocessing involved a band-pass filter for noise reduction, followed by feature extraction using the Discrete Wavelet Transform (DWT) and the Fast Fourier Transform (FFT). Three models were evaluated: FBC-EEGNet, InceptionTime-light, and their ensemble. Results indicate that InceptionTime-light achieved the highest accuracy of 79.2% on the CIT dataset, surpassing FBC-EEGNet (70.8%). On the LieWaves dataset, FBC-EEGNet achieved superior performance, with 71.6% accuracy, compared with InceptionTime-light (65.93%). In terms of specificity, FBC-EEGNet reached 93.7% on the CIT dataset, while InceptionTime-light demonstrated balanced performance with 62.5% sensitivity and 87.5% specificity. Notably, the ensemble model provided stable and generalizable outcomes, yielding 70.8% accuracy, 62.5% sensitivity, and 75% specificity on the CIT dataset, confirming its robustness across subject groups. In conclusion, FBC-EEGNet is effective for maximizing specificity, InceptionTime-light achieves higher accuracy, and the ensemble model delivers a balanced trade-off.
This work advances reliable EEG-based deception detection and sets the stage for future research on explainable and interpretable models, validated on larger and more diverse datasets.
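The accuracy, sensitivity, and specificity figures above derive from a binary confusion matrix. A minimal pure-Python sketch (labels 1 = guilty, 0 = innocent; the toy predictions are illustrative, not the study's outputs):

```python
# Accuracy, sensitivity (TPR), and specificity (TNR) from binary labels.

def confusion_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
    }

y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
print(confusion_metrics(y_true, y_pred))  # all three equal 0.75 here
```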
Optimized Multi-Resolution Attention-Based Architecture for Effective Diabetic Skin Lesion Classification Jaleesha, B. K.; Suganthi, Suganthi; Priyadharsini, N. K.; S., Yuvaraj; Pyingkodi, M.; Vallikkannu, M.

DOI: 10.35882/jeeemi.v8i1.1320

Abstract

Early and reliable identification of diabetic skin complications, including ischemia and infection, is essential for timely clinical intervention and prevention of severe outcomes. Nevertheless, traditional deep learning models often exhibit limited generalization capability and high computational demands, particularly when distinguishing between visually subtle infection types. To overcome these challenges, this study introduces an end-to-end deep learning architecture termed the Enhanced Multi-Resolution Multi-Path Attention Network (EMRMP-Net), specifically designed for robust diabetic lesion classification. A key contribution of this work is the introduction of a trainable attention-based fusion mechanism that adaptively learns to weight and integrate multi-resolution feature maps, enhancing contextual understanding and discriminative performance. To address the prevalent issue of class imbalance in medical imaging datasets, EMRMP-Net utilizes focal loss and domain-tailored data augmentation, thereby promoting stable learning and improved representation of minority classes. Additionally, a shared classification head across multiple resolution pathways enables joint feature optimization, reducing computational redundancy and improving learning efficiency compared to traditional MRMP models. Comprehensive experiments on the publicly available Diabetic Foot Ulcer (DFU) dataset demonstrate that EMRMP-Net surpasses existing state-of-the-art methods, achieving 98.12% accuracy and 98.14% F1-score for ischemia detection, and 95.27% accuracy with 93.68% F1-score for infection classification. Overall, EMRMP-Net provides a highly effective, computationally efficient, and generalizable framework for automated diabetic skin lesion analysis, demonstrating strong potential for real-world clinical applications.
EMRMP-Net is designed as a general framework for diabetic skin lesion analysis, capable of handling diverse lesion characteristics through multi-resolution and attention-based feature learning. However, in this work, the model is explicitly formulated, trained, and evaluated for the clinically critical binary classification task of distinguishing ischemic ulcers from infected ulcers within DFU imagery.
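The focal loss cited as the class-imbalance remedy down-weights easy examples so minority-class errors dominate training. A pure-Python sketch of the binary form; gamma=2 and alpha=0.25 are the usual defaults, an assumption since the paper's settings are not given here:

```python
# Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).
import math

def binary_focal_loss(p, y, gamma=2.0, alpha=0.25):
    """p: predicted probability of class 1; y: true label in {0, 1}."""
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

# A well-classified example is down-weighted far more than a hard one:
print(binary_focal_loss(0.9, 1))  # easy positive -> tiny loss
print(binary_focal_loss(0.1, 1))  # hard positive -> much larger loss
```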
Optimized Metaheuristic Integrated Neuro-Fuzzy Deep Learning Framework for EEG-Based Lie Detection Nagale, Tanmayi; Khandare, Anand

DOI: 10.35882/jeeemi.v8i1.1340

Abstract

EEG-based deception detection remains challenging due to three critical limitations: high inter-subject variability, which restricts generalization, the black-box nature of deep learning models that undermines forensic interpretability, and substantial computational overhead arising from high-dimensional multi-channel EEG data. Although recent state-of-the-art approaches report accuracies of 82–88%, they fail to provide the transparency required for legal and forensic admissibility. To address these limitations, this study aims to develop an accurate, computationally efficient, and explainable EEG-based deception detection framework suitable for real-world forensic applications. The primary contribution of this work is a novel hybrid neuro-fuzzy architecture that jointly integrates intelligent channel selection, complementary deep feature learning, and transparent fuzzy reasoning, enabling high performance without sacrificing interpretability. The proposed framework follows a five-stage pipeline: (1) intelligent channel selection using Type-2 fuzzy inference with ANFIS-based ranking and multi-objective evolutionary optimization (MOEA/D), reducing EEG dimensionality from 64 to 14 channels (78.1% reduction); (2) dual-path deep learning that combines EEGNet for spatial–temporal feature extraction with InceptionTime-Light for multi-scale temporal representations; (3) a fuzzy attention mechanism to generate interpretable feature importance weights; (4) an ANFIS-based classifier employing Takagi–Sugeno fuzzy rules for transparent decision-making; and (5) triple-level interpretability through channel importance visualization, attention-weighted features, and extractable linguistic rules. The framework is evaluated on two benchmark datasets: LieWaves (27 subjects, 5-channel EEG) and the Concealed Information Test (CIT) dataset (79 subjects, 16-channel EEG).
Experimental results demonstrate superior performance, achieving 93.8% accuracy on LieWaves and 92.7% on the CIT dataset, representing an improvement of 5.3 percentage points over the previous best-performing methods, while maintaining balanced sensitivity (92.4%) and specificity (95.2%). In conclusion, this work establishes that neuro-fuzzy integration can simultaneously achieve high classification accuracy, computational efficiency, and forensic-grade explainability, thereby advancing the practical deployment of EEG-based deception detection systems in real-world forensic applications.
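The Takagi–Sugeno rules used in stage (4) can be illustrated with a zero-order, two-rule toy system: each rule fires in proportion to its membership degree, and the output is the firing-strength-weighted average of constant consequents. Memberships and consequent values below are invented for illustration, not taken from the paper:

```python
# Zero-order Takagi-Sugeno inference over one feature normalized to [0, 1].

def ts_infer(x):
    # Rule 1: IF feature is LOW  THEN output = 0.2 (lean innocent)
    # Rule 2: IF feature is HIGH THEN output = 0.9 (lean guilty)
    w_low = max(0.0, 1.0 - x)          # LOW membership (linear ramp down)
    w_high = max(0.0, min(1.0, x))     # HIGH membership (linear ramp up)
    # Weighted average of the rule consequents:
    return (w_low * 0.2 + w_high * 0.9) / (w_low + w_high)

print(ts_infer(0.0))  # only Rule 1 fires -> 0.2
print(ts_infer(0.5))  # equal firing -> 0.55
print(ts_infer(1.0))  # only Rule 2 fires -> 0.9
```

In an ANFIS classifier, the membership parameters and consequents are learned rather than hand-set, but the inference step has exactly this shape, which is what makes the decision process extractable as linguistic rules.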
HALF-MAFUNET: A Lightweight Architecture Based on Multi-Scale Adaptive Fusion for Medical Image Segmentation Maula Sandy, Abiaz Fazel; Prasetyo, Heri

DOI: 10.35882/jeeemi.v8i1.1357

Abstract

Medical image segmentation is a critical component in computer-aided diagnosis systems, but many deep learning models still require large numbers of parameters and heavy computation. Classical CNN-based architectures such as U-Net and its variants achieve good accuracy but are often too heavy for real-world deployment. Meanwhile, modern Transformer-based or Mamba-based models capture long-range information but typically increase model complexity. Because of these limitations, there is still a need for a lightweight segmentation model that can provide a good balance between accuracy and efficiency across different types of medical images. This paper proposes Half-MAFUNet, a lightweight architecture based on multi-scale adaptive fusion and designed as a simplified version of MAFUNet. The main contribution of this work is combining the efficient encoder structure of Half-UNet with advanced fusion and attention mechanisms. Half-MAFUNet integrates Hierarchy Aware Mamba (HAM) for global feature modelling, Multi-Scale Adaptive Fusion (MAF) to combine global and local information, and two attention modules, Adaptive Channel Attention (ACA) and Adaptive Spatial Attention (ASA), to refine skip connections. In addition, this model incorporates Channel Atrous Spatial Pyramid Pooling (CASPP) to capture multi-scale receptive fields efficiently without increasing computational cost. Together, these components create a compact architecture that maintains strong representational power. The model is trained and evaluated on three public datasets: CVC-ClinicDB for colorectal polyp segmentation, BUSI for breast tumor segmentation, and ISIC-2018 for skin lesion segmentation. All images are resized to 256×256 pixels and processed using geometric and intensity-based augmentations.
Half-MAFUNet achieves competitive performance, obtaining mean IoU of around 84–85% and Dice/F1-Score of around 90–92% across datasets, while using significantly fewer parameters and GFLOPs compared to U-Net, Att-UNet, UNeXt, MALUNet, LightM-UNet, VM-UNet, and UD-Mamba. These results show that Half-MAFUNet provides accurate and efficient medical image segmentation, making it suitable for real-world deployment on devices with limited computational resources.
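The mean IoU headline metric averages per-image intersection-over-union scores over a test set. A pure-Python sketch on toy binary masks (illustrative values, not the authors' code):

```python
# Mean intersection-over-union over a batch of flat 0/1 masks.

def iou(pred, target):
    """IoU = |P∩T| / |P∪T|; returns 1.0 for two empty masks."""
    inter = sum(p & t for p, t in zip(pred, target))
    union = sum(p | t for p, t in zip(pred, target))
    return 1.0 if union == 0 else inter / union

def mean_iou(preds, targets):
    scores = [iou(p, t) for p, t in zip(preds, targets)]
    return sum(scores) / len(scores)

preds   = [[1, 1, 0, 0], [0, 1, 1, 1]]   # two toy predictions
targets = [[1, 0, 0, 0], [0, 1, 1, 0]]   # matching ground truths
print(mean_iou(preds, targets))  # (1/2 + 2/3) / 2
```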
Medical Image Segmentation Using a Global Context-Aware and Progressive Channel-Split Fusion U-Net with Integrated Attention Mechanisms Widhayaka, Alfath Roziq; Prasetyo, Heri

DOI: 10.35882/jeeemi.v8i1.1371

Abstract

Medical image segmentation serves as a key component in Computer-Aided Diagnosis (CAD) systems across various imaging modalities. However, the task remains challenging because many images have low contrast and high lesion variability, and many clinical environments require efficient models. This study proposes CFCSE-Net, a U-Net-based model that builds upon X-UNet as a baseline for the CFGC and CSPF modules. This model incorporates a modified CFGC module with added Ghost Modules in the encoder, a CSPF module in the decoder, and Enhanced Parallel Attention (EPA) in the skip connections. The main contribution of this paper is the design of a lightweight architecture that combines multi-scale feature extraction with an attention mechanism to maintain low model complexity and increase segmentation accuracy. We train and evaluate CFCSE-Net on four public datasets: Kvasir-SEG, CVC-ClinicDB, BUSI (resized to 256 × 256 pixels), and PH2 (resized to 320 × 320 pixels), with data augmentation applied. We report segmentation performance as the mean ± standard deviation of IoU, DSC, and accuracy across three random seeds. CFCSE-Net achieves 79.78% ± 1.99 IoU, 87.21% ± 1.72 DSC, and 96.70% ± 0.59 accuracy on Kvasir-SEG, 88.11% ± 0.86 IoU, 93.42% ± 0.55 DSC, and 99.04% ± 0.09 accuracy on CVC-ClinicDB, 69.33% ± 2.66 IoU, 78.80% ± 2.65 DSC, and 96.30% ± 0.51 accuracy on BUSI, and 92.27% ± 0.52 IoU, 95.92% ± 0.30 DSC, and 98.06% ± 0.16 accuracy on PH2. Despite its strong performance, the model remains compact with 909,901 parameters and low computational cost, requiring 3.24 GFLOPs for 256 × 256 inputs and 5.07 GFLOPs for 320 × 320 inputs. These results show that CFCSE-Net maintains stable performance on polyp, breast ultrasound, and skin lesion segmentation while it stays compact enough for CAD systems on hardware with low computational resources.
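The mean ± standard deviation reporting across random seeds used above can be sketched with Python's statistics module; the three IoU values below are hypothetical, not from the paper:

```python
# Report a metric as "mean ± std" over repeated runs with different seeds.
import statistics

def report(name, values):
    mean = statistics.mean(values)
    std = statistics.stdev(values)  # sample std (n-1), a common convention
    return f"{name}: {mean:.2f} ± {std:.2f}"

seed_ious = [78.1, 81.5, 79.7]  # hypothetical IoU (%) from three seeds
print(report("IoU", seed_ious))
```

Reporting the spread alongside the mean, as this abstract does, guards against a single lucky seed inflating the comparison.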
Hybrid Swarm-Driven Vision Transformer (HSViT) for Lung Cancer Segmentation and Classification from CT Scans V, Kavithamani; Kavya, V.; Suganthi, R.; S., Yuvaraj; Monisha, P.; Arun Patrick

DOI: 10.35882/jeeemi.v8i1.1384

Abstract

Lung cancer segmentation and classification from computed tomography (CT) images play a vital role in early diagnosis, prognosis assessment, and effective treatment planning. Despite significant progress in medical image analysis, accurate lung lesion analysis remains highly challenging due to overlapping anatomical structures, heterogeneous tissue intensity distributions, irregular and complex tumor shapes, and poorly defined lesion boundaries. These factors often limit the reliability and generalization capability of conventional deep learning models when applied to real-world clinical data. To address these challenges, this paper proposes a Hybrid Swarm-Driven Vision Transformer (HSViT) framework that synergistically combines swarm intelligence with transformer-based deep learning. The processing pipeline begins with Contrast Limited Adaptive Histogram Equalization (CLAHE), which enhances local contrast while suppressing noise amplification, thereby improving the visibility of subtle pulmonary nodules and lesion regions. Subsequently, a U-Net segmentation model optimized using the Coyote Optimization Algorithm (COA) is employed to accurately delineate lung lesions. COA, a swarm-based metaheuristic, adaptively fine-tunes U-Net parameters, enabling improved convergence and more precise boundary detection compared to gradient-based optimization alone. Following segmentation, discriminative lesion features are extracted and passed to the HSViT classifier. The proposed classifier integrates a Dual-Stage Attention Fusion (DSAF) mechanism, which effectively captures both fine-grained local spatial features and long-range global contextual dependencies. The framework achieves a Dice Coefficient of 0.95, an overall classification accuracy of 98.7%, and a minimized training loss of 0.04. 
These results highlight the strong potential of HSViT for reliable automated lung cancer diagnosis and for supporting clinical decision-making systems in real-world healthcare environments.
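CLAHE extends plain histogram equalization by applying it per tile with contrast clipping. A pure-Python sketch of the base (global) equalization step only — a teaching illustration of the contrast-enhancement idea, not the paper's CLAHE pipeline:

```python
# Global histogram equalization on a flat 8-bit grayscale image:
# map each intensity through the normalized cumulative histogram.

def equalize(pixels, levels=256):
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)  # first nonzero CDF value
    n = len(pixels)
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1))
           if n > cdf_min else 0
           for c in cdf]
    return [lut[v] for v in pixels]

low_contrast = [100, 100, 101, 101, 102, 102, 103, 103]
print(equalize(low_contrast))  # narrow band spread over [0, 255]
```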
Optimizing Input Window Length and Feature Requirements for Machine Learning-Based Postprandial Hyperglycemia Prediction Maulana, Muhammad Rafly Alfarizqy; Indriani, Fatma; Abadi, Friska; Kartini, Dwi; Mazdadi, Muhammad Itqan

DOI: 10.35882/jeeemi.v8i1.1401

Abstract

Continuous glucose monitoring systems currently generate alerts only after blood glucose thresholds are breached, limiting their utility for proactive diabetes management. Predicting postprandial glucose excursions before they occur requires determining the optimal amount of historical data and identifying which features contribute most to prediction accuracy. This study systematically evaluates how the length of the pre-meal observation window and feature composition affect machine-learning predictions of hyperglycemia events 60 minutes after eating. We analyzed 1,642 meal events from 45 adults wearing continuous glucose sensors, constructing features from pre-meal glucose trajectories, meal macronutrients, time of day, and health status. Four observation windows (15, 30, 45, 60 minutes) and three feature sets (all features, glucose-only, meal-only) were evaluated using Random Forest, XGBoost, and CatBoost with 5-fold group cross-validation. CatBoost with a 30-minute window achieved the best performance: 72.6% F1-macro, 79.6% accuracy, and 64.0% recall for hyperglycemia detection. Extending windows beyond 30 minutes did not yield consistent benefits, whereas 15-minute windows yielded comparable results. Glucose trajectory features alone retained 94% of full model performance (68.5% F1-macro), whereas meal composition alone proved insufficient (59.4% F1-macro). These findings demonstrate that recent glucose history dominates short-term prediction, enabling practical real-time systems with minimal data requirements. A 30-minute observation window with glucose and meal features offers an effective balance between prediction accuracy and system responsiveness.
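The pre-meal observation-window construction can be sketched as follows, assuming 5-minute CGM sampling; the feature names and glucose trace are illustrative, not the study's exact feature set:

```python
# Extract simple features from the last `window_min` minutes of a CGM trace
# before a meal, with readings every `step_min` minutes.

def window_features(glucose, window_min=30, step_min=5):
    n = window_min // step_min          # number of readings in the window
    window = glucose[-n:]
    # Slope over the window's elapsed time (first to last reading):
    slope = (window[-1] - window[0]) / (window_min - step_min)  # mg/dL per min
    return {
        "last": window[-1],
        "mean": sum(window) / len(window),
        "slope": slope,
    }

trace = [95, 97, 100, 104, 110, 118]  # last 30 min, rising toward the meal
feats = window_features(trace)
print(feats)  # rising slope hints at a coming postprandial excursion
```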
