Contact Name
Triwiyanto
Contact Email
triwiyanto123@gmail.com
Phone
+628155126883
Journal Mail Official
editorial.jeeemi@gmail.com
Editorial Address
Department of Electromedical Engineering, Poltekkes Kemenkes Surabaya Jl. Pucang Jajar Timur No. 10, Surabaya, Indonesia
Location
Kota Surabaya,
Jawa Timur
INDONESIA
Journal of Electronics, Electromedical Engineering, and Medical Informatics
ISSN: -     EISSN: 2656-8632     DOI: https://doi.org/10.35882/jeeemi
The Journal of Electronics, Electromedical Engineering, and Medical Informatics (JEEEMI) is a peer-reviewed open-access journal. The journal invites scientists and engineers throughout the world to exchange and disseminate theoretical and practice-oriented work covering three (3) major areas of research: 1) Electronics, 2) Biomedical Engineering, and 3) Medical Informatics (with emphasis on hardware and software design). Submitted papers must be written in English for an initial review by the editors and a further review process by a minimum of two reviewers.
Articles: 24 Documents
Issue: Vol 7 No 4 (2025): October
Heart Disease Classification Using Random Forest and Fox Algorithm as Hyperparameter Tuning Masbakhah, Afidatul; Sa'adah, Umu; Muslikh, Mohamad
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 4 (2025): October
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i4.932

Abstract

Heart disease remains the leading cause of death worldwide, making early and accurate diagnosis crucial for reducing mortality and improving patient outcomes. Traditional diagnostic approaches often suffer from subjectivity, delay, and high costs. Therefore, an effective and automated classification system is necessary to assist medical professionals in making more accurate and timely decisions. This study aims to develop a heart disease classification model using Random Forest (RF), optimized through the FOX algorithm for hyperparameter tuning, to improve predictive performance and reliability. The main contribution of this research lies in the integration of the FOX metaheuristic optimization algorithm with the RF classifier. FOX, inspired by fox hunting behavior, balances exploration and exploitation in searching for the optimal hyperparameters. The proposed RF-FOX model is evaluated on the UCI Heart Disease dataset, consisting of 303 instances and 13 features. Several preprocessing steps were conducted, including label encoding, outlier removal, missing-value imputation, normalization, and class balancing using SMOTE-NC. FOX was used to optimize six RF hyperparameters across a defined search space. The experimental results demonstrate that the RF-FOX model achieved superior performance compared to standard RF and other hybrid optimization methods. With a training accuracy of 100% and a testing accuracy of 97.83%, the model also attained precision (97.83%), recall (97.88%), and F1-score (97.89%), significantly outperforming the RF-GS, RF-RS, RF-PSO, RF-BA, and RF-FA models in all evaluation metrics. In conclusion, the RF-FOX model proves highly effective for heart disease classification, providing enhanced accuracy, reduced misclassification, and clinical applicability. This approach not only optimizes classifier performance but also supports medical decision-making with interpretable and reliable outcomes. Future work may involve validating the model on more diverse datasets to further ensure its generalizability and robustness.
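The exploration-exploitation balance the abstract attributes to FOX can be sketched as a minimal metaheuristic tuning loop. This is an illustrative sketch only, not the authors' FOX implementation: the search space bounds and the toy objective (a stand-in for cross-validated RF accuracy) are hypothetical.

```python
import random

# Hypothetical two-parameter search space for an RF classifier.
SPACE = {
    "n_estimators": (50, 500),
    "max_depth": (2, 30),
}

def toy_objective(params):
    # Stand-in for cross-validated accuracy: peaks near an invented optimum.
    return (-abs(params["n_estimators"] - 300) / 300
            - abs(params["max_depth"] - 12) / 12)

def sample(space, rng):
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in space.items()}

def metaheuristic_search(space, objective, iters=200, seed=0):
    rng = random.Random(seed)
    best = sample(space, rng)
    best_score = objective(best)
    for _ in range(iters):
        if rng.random() < 0.5:
            cand = sample(space, rng)  # exploration: global random jump
        else:
            # exploitation: small Gaussian step around the incumbent, clipped
            cand = {k: min(max(v + rng.gauss(0, 0.05 * (space[k][1] - space[k][0])),
                               space[k][0]), space[k][1])
                    for k, v in best.items()}
        score = objective(cand)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score

best_params, best_score = metaheuristic_search(SPACE, toy_objective)
```

The 50/50 explore-exploit split here is a simplification; FOX modulates this balance adaptively during the run.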
Hybrid CNN–ViT Model for Breast Cancer Classification in Mammograms: A Three-Phase Deep Learning Framework Saini, Vandana; Khurana, Meenu; Challa, Rama Krishna

DOI: 10.35882/jeeemi.v7i4.920

Abstract

Breast cancer is one of the leading causes of death among women worldwide. Early and accurate detection plays a vital role in improving survival rates and guiding effective treatment. In this study, we propose a deep learning-based model for automatic breast cancer detection using mammogram images. The model is divided into three phases: preprocessing, segmentation, and classification. The first two phases, image enhancement and segmentation, were developed and validated in our previous works. Both phases were designed in a robust manner using learning networks; the use of VGG-16 in preprocessing and U-Net in segmentation helps enhance the overall classification performance. In this paper, we focus on the classification phase and introduce a novel hybrid deep learning model that combines the strengths of Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs). This model captures both fine-grained image details and the broader global context, making it highly effective for distinguishing between benign and malignant breast tumors. We also include attention-based feature fusion and Grad-CAM visualizations to make predictions more explainable for clinical use and reference. The model was tested on multiple benchmark datasets, DDSM, INbreast, and MIAS, as well as a combination of all three, and achieved excellent results, including 100% accuracy on MIAS and over 99% accuracy on the other datasets. Compared to recent deep learning models, our method outperforms existing approaches in both accuracy and reliability. This research offers a promising step toward supporting radiologists with intelligent tools that can improve the speed and accuracy of breast cancer diagnosis.
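The attention-based feature fusion mentioned above can be illustrated in miniature: one scalar attention score per branch, softmax-normalized, then a weighted sum of the branch feature vectors. All shapes and the scoring vector below are invented; the paper's actual fusion layer is not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)
cnn_feat = rng.normal(size=(1, 256))   # local, fine-grained CNN features
vit_feat = rng.normal(size=(1, 256))   # global-context ViT features

def attention_fuse(branches, score_w):
    # Score each branch, softmax the scores, and blend the feature vectors.
    scores = np.array([(b @ score_w).item() for b in branches])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    fused = sum(w * b for w, b in zip(weights, branches))
    return fused, weights

score_w = rng.normal(size=(256,))      # hypothetical learned scoring vector
fused, weights = attention_fuse([cnn_feat, vit_feat], score_w)
```

In a trained model `score_w` would be learned jointly with both branches; here it is random purely to show the mechanics.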
Optimizing Medical Logistics Networks: A Hybrid Bat-ALNS Approach for Multi-Depot VRPTW and Simultaneous Pickup-Delivery Taha, Anass; Elatar, Said; El Bazzi Mohamed, Salim; Ait Ider, Abdelouahed; Najdi, Lotfi

DOI: 10.35882/jeeemi.v7i4.1054

Abstract

This paper tackles the multi-depot heterogeneous-fleet vehicle-routing problem with time windows and simultaneous pickup and delivery (MDHF-VRPTW-SPD), a variant that mirrors the growing complexity of modern healthcare logistics. The primary purpose of this study is to model this complex routing problem as a mixed-integer linear program and to develop and validate a novel hybrid metaheuristic, B-ALNS, capable of delivering robust, high-quality solutions. The proposed B-ALNS combines a discrete Bat Algorithm with Adaptive Large Neighborhood Search, where the bat component supplies frequency-guided diversification, while ALNS adaptively selects destroy and repair operators and exploits elite memory for focused intensification. Extensive experiments were conducted on twenty new benchmark instances (ranging from 48 to 288 customers), derived from Cordeau’s data and enriched with pickups and a four-class fleet. Results show that B-ALNS attains a mean cost 1.15% lower than a standalone discrete BA and 2.78% lower than a simple LNS, achieving the best average cost on 17/20 instances and the global best solution in 85% of test instances. Statistical tests further confirm the superiority of the hybrid B-ALNS: a Friedman test and Wilcoxon signed-rank comparisons yield p-values of 0.0013 versus BA and 0.0002 versus LNS, respectively. Although B-ALNS trades speed for quality (182.65 seconds average runtime versus 54.04 seconds for BA and 11.61 seconds for LNS), it produces markedly more robust solutions, with the lowest cost standard deviation and consistently balanced routes. These results demonstrate that the hybrid B-ALNS delivers statistically significant, high-quality solutions within tactical planning times, offering a practical decision-support tool for secure, cold-chain-compliant healthcare logistics.
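The large-neighborhood-search core of ALNS, destroy part of a solution, then repair it greedily, can be sketched on a toy single-route problem. The coordinates below are invented, and the paper's bat component, adaptive operator weights, fleet classes, and time windows are deliberately omitted.

```python
import random, math

# Toy instance: eight customers on the unit circle, one depot at the origin.
CUSTOMERS = {i: (math.cos(i), math.sin(i)) for i in range(1, 9)}
DEPOT = (0.0, 0.0)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def cost(route):
    pts = [DEPOT] + [CUSTOMERS[c] for c in route] + [DEPOT]
    return sum(dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))

def destroy_random(route, rng, k=3):
    # "Destroy": drop k random customers from the route.
    removed = rng.sample(route, k)
    return [c for c in route if c not in removed], removed

def repair_greedy(route, removed):
    # "Repair": reinsert each removed customer at its cheapest position.
    for c in removed:
        best_pos = min(range(len(route) + 1),
                       key=lambda p: cost(route[:p] + [c] + route[p:]))
        route = route[:best_pos] + [c] + route[best_pos:]
    return route

def lns(iters=100, seed=1):
    rng = random.Random(seed)
    best = list(CUSTOMERS)
    for _ in range(iters):
        partial, removed = destroy_random(best, rng)
        cand = repair_greedy(partial, removed)
        if cost(cand) < cost(best):
            best = cand
    return best

best_route = lns()
```

A full ALNS would keep several destroy/repair operators and reweight them by past success; B-ALNS additionally layers the bat algorithm's frequency-guided diversification on top.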
BRU-SOAT: Brain Tissue Segmentation via Deep Learning based Sailfish Optimization and Dual Attention Segnet Athur Shaik Ali Gousia Banu; Hazra, Sumit

DOI: 10.35882/jeeemi.v7i4.795

Abstract

Automated segmentation of brain tissue into gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) from magnetic resonance imaging (MRI) plays a crucial role in diagnosing neurological disorders such as Alzheimer’s disease, epilepsy, and multiple sclerosis. A key challenge in brain tissue segmentation (BTS) is accurately distinguishing boundaries between GM, WM, and CSF due to intensity overlaps and noise in MRI images. To overcome these challenges, we propose a novel deep learning-based BRU-SOAT model for BTS using the BrainWeb dataset. Initially, brain MRI images undergo skull stripping to remove skull regions, followed by preprocessing with a Contrast Stretching Adaptive Wiener (CSAW) filter to improve image quality and reduce noise. The pre-processed images are fed into ResEfficientNet for fine feature extraction. After feature extraction, Sailfish Optimization (SFO) is employed to select the most relevant features while eliminating irrelevant ones. A Dual Attention SegNet (DAS-Net) then segments GM, CSF, and WM with high precision. The proposed BRU-SOAT model is assessed based on its precision, F1 score, specificity, recall, accuracy, Jaccard Index, and Dice Index. The proposed BRU-SOAT model achieved a segmentation accuracy of 99.17% for brain tissue segmentation. Moreover, the proposed DAS-Net outperformed fuzzy c-means clustering, fuzzy consensus clustering, and U-Net methods, achieving 98.50% (CSF), 98.63% (GM), and 99.15% (WM), indicating improved segmentation accuracy. In conclusion, the BRU-SOAT model provides a robust and highly accurate framework for automated brain tissue segmentation, supporting improved clinical diagnosis and neuroimaging analysis.
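The Jaccard and Dice indices used to score segmentations above are simple overlap ratios between binary masks. The tiny masks below are invented for illustration; the paper's per-class GM/WM/CSF evaluation would apply the same formulas per tissue label.

```python
import numpy as np

def dice(pred, truth):
    # Dice = 2|A ∩ B| / (|A| + |B|)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def jaccard(pred, truth):
    # Jaccard = |A ∩ B| / |A ∪ B|
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union

pred  = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
truth = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
```

Here the intersection is 2 pixels and each mask has 3, so Dice = 4/6 while Jaccard = 2/4; Dice always weights the overlap more generously.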
Unified Deep Architectures for Real-Time Object Detection and Semantic Reasoning in Autonomous Vehicles Aher, Vishal; Jondhale, Satish; Agarkar, Balasaheb; Chaudhari, Sachin

DOI: 10.35882/jeeemi.v7i4.813

Abstract

The development of autonomous vehicles (AVs) has revolutionized the transportation industry, promising to boost mobility, lessen traffic, and increase road safety. However, the complexity of the driving environment and the requirement for real-time processing of vast amounts of sensor data present serious difficulties for AV systems. Researchers have investigated various computer vision approaches, such as object detection, lane detection, and traffic sign recognition, to overcome these issues. This research presents an integrated approach to autonomous vehicle perception, combining real-time object detection, semantic segmentation, and classification within a unified deep learning architecture. Our approach leverages the strengths of existing frameworks, including MultiNet’s real-time semantic reasoning capabilities, the fast encoding of PointPillars for identifying objects from point clouds, and a reliable one-stage monocular 3D object detection system. The proposed model improves computational efficiency and accuracy by utilizing a shared encoder and task-specific decoders that perform classification, detection, and segmentation concurrently. The architecture is evaluated on challenging datasets, demonstrating outstanding speed and accuracy suitable for real-time applications in autonomous driving. This integration promises significant advancements in the perception systems of autonomous vehicles by providing in-depth knowledge of the vehicle’s environment through efficient deep learning techniques. Our model uses YOLOv8 and MultiNet and, during training, achieved 93.5% accuracy, 92.7% precision, 82.1% recall, and 72.9% mAP.
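The mAP figure reported above rests on intersection-over-union (IoU) between predicted and ground-truth boxes. A minimal sketch, with made-up boxes in `[x1, y1, x2, y2]` form:

```python
def iou(a, b):
    # Intersection rectangle (clamped to zero if the boxes are disjoint).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

# Two unit-offset 2x2 boxes: intersection area 1, union area 7.
score = iou([0, 0, 2, 2], [1, 1, 3, 3])
```

Detections are counted as true positives when IoU exceeds a threshold (commonly 0.5), and mAP averages precision over recall levels and classes on top of this test.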
MCRNET-RS: Multi-Class Retinal Disease Classification using Deep Learning-based Residual Network-Rescaled N, Mohana Suganthi; M, Arun

DOI: 10.35882/jeeemi.v7i4.925

Abstract

Retinal diseases are a major cause of vision impairment, leading to partial or complete blindness if undiagnosed. Early detection and accurate classification of these conditions are crucial for effective treatment and vision preservation. However, conventional diagnostic techniques are time-consuming and require professional assistance. Additionally, existing deep-learning models struggle with feature extraction and classification accuracy because of differences in image quality and disease severity. To overcome these challenges, a novel deep learning (DL)-based MCRNET-RS approach is proposed for multi-class retinal disease classification using fundus images. The gathered fundus images are pre-processed using the Savitzky-Golay Filter (SGF) to enhance and preserve essential structural details. The DL-based Residual Network-Rescaled (ResNet-RS) is used for hierarchical feature extraction to support accurate retinal disease classification. A multi-layer perceptron (MLP) is used to classify retinal diseases such as Diabetic Neuropathy (DN), Branch Retinal Vein Occlusion (BRVO), Diabetic Retinopathy (DR), Healthy, Macular Hole (MH), Myopia (MYA), Optic Disc Cupping (ODC), Age-Related Macular Degeneration (ARMD), Optic Disc Pit (ODP), and Tilted Superior Lateral Nerve (TSLN). The effectiveness of the proposed MCRNET-RS method was assessed using precision, recall, specificity, F1 score, and accuracy. The proposed MCRNET-RS approach achieves an overall accuracy of 98.17% and an F1 score of 95.99% for retinal disease classification. The proposed approach improved the total accuracy by 3.27%, 4.48%, and 4.28% compared to EyeDeep-Net, Two I/P VGG16, and IDL-MRDD, respectively. These results confirm that the proposed MCRNET-RS framework provides a strong, scalable, and highly accurate solution for automated retinal disease classification, thereby supporting early diagnosis and effective clinical decision-making.
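For a multi-class problem like the one above, the reported F1 score is typically a macro average of per-class F1 values computed from a confusion matrix. A 3-class toy matrix (rows = true class, columns = predicted class) stands in for the paper's 10 retinal classes:

```python
import numpy as np

def macro_f1(cm):
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                      # correct predictions per class
    precision = tp / cm.sum(axis=0)       # per predicted-class column
    recall = tp / cm.sum(axis=1)          # per true-class row
    f1 = 2 * precision * recall / (precision + recall)
    return f1.mean()                      # unweighted average over classes

cm = [[5, 0, 0],
      [0, 4, 1],
      [0, 1, 4]]
```

With this matrix the per-class F1 values are 1.0, 0.8, and 0.8, so the macro F1 is 13/15; macro averaging gives rare classes the same weight as common ones, which matters when disease classes are imbalanced.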
MedProtect: Protecting Electronic Patient Data Using Interpolation-Based Medical Image Steganography Muhammad, Aditya Rizki; Ramadhan, Irsyad Fikriansyah; Croix, Ntivuguruzwa Jean De La; Ahmad, Tohari; Uwizeye, Dieudonne; Kantarama, Evelyne

DOI: 10.35882/jeeemi.v7i4.977

Abstract

Electronic Patient Records (EPRs) represent critical elements of digital healthcare systems, as they contain confidential and sensitive medical information essential for patient care and clinical decision-making. Due to their sensitive nature, EPRs frequently face threats from unauthorized intrusions, security breaches, and malicious attacks. Safeguarding such information has emerged as an urgent concern in medical data security. Steganography offers a compelling solution by hiding confidential data within conventional carrier objects such as medical imagery. Unlike traditional cryptographic methods that merely alter the data representation, steganography conceals the existence of the information itself, thereby providing discretion, security, and resilience against unauthorized disclosure. However, embedding patient information inside medical images introduces a new challenge: the method must maintain the image's visual fidelity to avoid compromising diagnostic precision, while ensuring reversibility for complete restoration of both the original imagery and the concealed information. To address these challenges, this research proposes MedProtect, a reversible steganographic framework customized for medical applications. MedProtect integrates pixel interpolation techniques and center-folding-based data transformation to insert sensitive records into medical imagery. This combination ensures accurate recovery of the original image while maintaining the quality of the resulting image. To assess the performance of MedProtect, this study evaluates two well-established image quality metrics, Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM). The results show that the framework achieves PSNR values of 48.190 to 53.808 dB and SSIM scores between 0.9956 and 0.9980. These outcomes demonstrate the high visual fidelity and imperceptibility achieved by the proposed method, underscoring its effectiveness as a secure approach for protecting electronic patient records within medical imaging systems.
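The PSNR figures quoted above compare the cover and stego images through their mean squared error. A minimal sketch for 8-bit grayscale images (the embedding procedure itself is not shown; the uniform distortion below is invented to make the arithmetic transparent):

```python
import numpy as np

def psnr(original, stego, peak=255.0):
    # PSNR = 10 * log10(peak^2 / MSE), in decibels.
    mse = np.mean((original.astype(float) - stego.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

original = np.full((8, 8), 100, dtype=np.uint8)
stego = original + 16        # uniform +16 distortion -> MSE = 256
value = psnr(original, stego)
```

A uniform error of 16 gray levels gives roughly 24 dB; the 48-54 dB range the paper reports corresponds to per-pixel errors well under one gray level on average, i.e. visually imperceptible embedding.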
H2O and H2O with NaOH-Based Multispectral Classification Using Image Segmentation and Ensemble Learning EfficientNetV2, ResNet50, MobileNetV3 Melinda, Melinda; Yunidar, Yunidar; Zulhelmi, Zulhelmi; Suyanda, Arya; Qadri Zakaria, Lailatul; Wong, W.K

DOI: 10.35882/jeeemi.v7i4.1016

Abstract

Multispectral imaging has become a promising approach to liquid classification, particularly for distinguishing visually similar but subtly spectrally distinct solutions, such as pure water (H₂O) and water mixed with sodium hydroxide (H₂O with NaOH). This study proposes a classification system based on image segmentation and deep learning, utilizing three leading Convolutional Neural Network (CNN) architectures: ResNet50, EfficientNetV2, and MobileNetV3. Before classification, each multispectral image was processed through color segmentation in HSV space to highlight the dominant spectral response, especially in the hue range of 110 to 170. The models were trained using a data augmentation scheme and optimized with the Adam algorithm, a batch size of 32, and a sigmoid activation function. The dataset consists of 807 images, comprising 295 H₂O images and 512 H₂O with NaOH images, divided into training (64%), validation (16%), and testing (20%) sets. Experimental results show that ResNet50 achieves the highest performance, with an accuracy of 93.83% and an F1 score of 93.67%, particularly in identifying alkaline pollution. EfficientNetV2 achieved the lowest loss (0.2001) and exhibited balanced performance across classes, while MobileNetV3, despite being a lightweight model, remained competitive with a recall of 0.97 for the H₂O with NaOH class. Further evaluation with Grad-CAM reveals that all models focus on the most critical spectral areas of the segmentation results. These findings support the effectiveness of combining color-based segmentation and CNNs for the spectral classification of liquids. This research is expected to serve as a stepping stone in the development of an efficient and accurate automatic liquid classification system for both laboratory and industrial applications.
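The hue-band segmentation step described above amounts to thresholding the hue channel of an HSV image. This sketch assumes OpenCV-style hue in [0, 179] and uses the stated 110-170 band; the toy pixel values are invented.

```python
import numpy as np

def hue_mask(hsv, lo=110, hi=170):
    # Keep pixels whose hue falls inside the band of interest.
    h = hsv[..., 0]
    return ((h >= lo) & (h <= hi)).astype(np.uint8)

# Tiny 2x2 "image": only the hue channel matters for the mask.
hsv = np.zeros((2, 2, 3), dtype=np.uint8)
hsv[..., 0] = [[120, 60], [170, 109]]
mask = hue_mask(hsv)
```

The binary mask would then be applied to the image (or used to crop it) before the CNN sees the data, so the classifier focuses on the spectrally informative region.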
Gallbladder Disease Classification from Ultrasound Images Using CNN Feature Extraction and Machine Learning Optimization Adhitama Putra, Ryan; Angga Pradipta, Gede; Desiana Wulaning Ayu, Putu

DOI: 10.35882/jeeemi.v7i4.1030

Abstract

Gallbladder diseases, including gallstones, carcinoma, and adenomyomatosis, may cause severe complications if not identified correctly and in a timely manner. However, ultrasound image interpretation relies heavily on operator experience and may suffer from subjectivity and inconsistency. This study aims to develop an automated and optimized classification model for gallbladder disease using ultrasound images, improving diagnostic reliability and efficiency. A key outcome of this research is a thorough assessment of how feature selection combined with hyperparameter tuning influences the accuracy of classical machine learning models that use features obtained via CNN-based feature extraction. The proposed pipeline enhances diagnostic accuracy while remaining computationally efficient. The method involves extracting deep features from ultrasound images using a pre-trained VGG16 CNN model. The features are subsequently reduced using the SelectKBest method through univariate feature selection. Multiple popular classification models, specifically SVM, Random Forest, KNN, and Logistic Regression, were tested using both original settings and hyperparameters adjusted through grid search. A complete evaluation of model performance was conducted on the test set, employing key performance indicators including overall prediction correctness (accuracy), actual positive rate (recall), positive prediction accuracy (precision), F1-score, and the area under the ROC curve. Evaluation results show that the SVM approach, combined with selected features and hyperparameter tuning, achieved the highest performance: 99.35% accuracy, 99.32% precision, 99.35% recall, and 99.33% F1-score, with a relatively short computation time of 18.4 seconds. In conclusion, feature selection and hyperparameter tuning significantly enhance classification performance, making the proposed method a promising candidate for clinical decision support in gallbladder disease diagnosis using ultrasound imaging.
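Univariate top-k selection in the spirit of SelectKBest can be re-sketched in numpy: score each feature independently, keep the k highest-scoring ones. The paper's actual scoring function is not specified here; a simple between-class mean-difference score, and the toy data, are stand-ins.

```python
import numpy as np

def select_k_best(X, y, k):
    # Score = |mean(feature | class 0) - mean(feature | class 1)|,
    # a crude univariate proxy for class separability (binary case).
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    score = np.abs(X[y == 0].mean(axis=0) - X[y == 1].mean(axis=0))
    top = np.argsort(score)[::-1][:k]   # indices of the k best features
    return np.sort(top)

# Feature 1 separates the classes strongly; features 0 and 2 barely do.
X = [[0.0, 5.0, 1.0],
     [0.1, 5.1, 0.9],
     [0.0, 1.0, 1.1],
     [0.1, 0.9, 1.0]]
y = [0, 0, 1, 1]
keep = select_k_best(X, y, k=1)
```

scikit-learn's SelectKBest instead uses statistical test scores (e.g. ANOVA F-values), but the select-top-k mechanic is the same.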
COV-TViT: An Improved Diagnostic System for COVID Pneumonitis Utilizing Transfer Learning and Vision Transformer on X-Ray Images Kumar, Sunil; Yadav, Amar Pal; Nandal, Neha; Awasthi, Vishal; Sapra, Luxmi; Chhabra, Prachi

DOI: 10.35882/jeeemi.v7i4.1037

Abstract

COVID is a contagious lung ailment that continues to pose a global burden, remaining a highly infectious respiratory disease with worldwide health implications. Traditional diagnostic methods, such as RT-PCR, though widely used, are often constrained by high costs, limited accessibility, and delayed results. In contrast, radiological lung disease detection has proven advantageous for identifying abnormalities, and chest X-rays are the most preferred radiological method due to their non-invasive nature. To address these limitations, this study aims to develop an efficient, automated diagnostic system leveraging radiological imaging, specifically X-rays, which are cost-effective and widely available. The primary contribution of this research is the introduction of COV-TViT, a novel deep learning framework that integrates transfer learning with a Vision Transformer (ViT) architecture for the accurate detection of COVID pneumonitis. The proposed method is evaluated using the COVID-QU-Ex dataset, which comprises a balanced set of X-ray images from COVID-positive and healthy individuals. Methodologically, the system employs pre-trained convolutional neural networks (CNNs), specifically VGG16 and VGG19 (Visual Geometry Group), for transfer learning, followed by fine-tuning to enhance feature extraction. The ViT model, known for its self-attention mechanism, is then applied to capture complex spatial dependencies in the X-ray images, enabling robust classification. Experimental results demonstrate that COV-TViT achieves a classification accuracy of 98.96% and an F1 score of 96.21%, outperforming traditional CNN-based transfer learning models in several scenarios. These findings underscore the model's potential for high-precision COVID pneumonitis detection. The proposed approach leverages self-attention mechanisms to extract features and learn representations. Overall, the proposed COV-TViT diagnostic system can be advantageous for the early identification of COVID pneumonitis.
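The self-attention mechanism credited above for capturing spatial dependencies reduces, in its simplest single-head form, to scaled dot-product attention over a sequence of patch tokens. The dimensions below are invented for illustration, and the learned query/key/value projections and patch embedding of a real ViT are omitted.

```python
import numpy as np

def self_attention(X):
    # Scaled dot-product attention with X serving as queries, keys, and values.
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                    # token-to-token similarity
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)               # softmax over keys
    return w @ X, w                                  # re-mixed tokens, weights

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))   # 4 hypothetical patch tokens, dim 8
out, attn = self_attention(tokens)
```

Each output token is a convex combination of all input tokens, which is why attention can relate distant image patches in a single layer, unlike a local convolution.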
