Contact Name
Triwiyanto
Contact Email
triwiyanto123@gmail.com
Phone
+628155126883
Journal Mail Official
editorial.jeeemi@gmail.com
Editorial Address
Department of Electromedical Engineering, Poltekkes Kemenkes Surabaya Jl. Pucang Jajar Timur No. 10, Surabaya, Indonesia
Location
Kota Surabaya,
Jawa Timur
INDONESIA
Journal of Electronics, Electromedical Engineering, and Medical Informatics
ISSN: -     EISSN: 2656-8632     DOI: https://doi.org/10.35882/jeeemi
The Journal of Electronics, Electromedical Engineering, and Medical Informatics (JEEEMI) is a peer-reviewed open-access journal. The journal invites scientists and engineers throughout the world to exchange and disseminate theoretical and practice-oriented work covering three (3) major areas of research: 1) Electronics, 2) Biomedical Engineering, and 3) Medical Informatics (with emphasis on hardware and software design). Submitted papers must be written in English for an initial review by the editors and a further review process by a minimum of two reviewers.
Articles: 287 documents
Optimized EEG-Based Depression Detection and Severity Staging Using GAN-Augmented Neuro-Fuzzy and Deep Learning Models Dhekane, Sudhir; Khandare, Anand
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 4 (2025): October
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i4.1107

Abstract

Detecting depression and identifying its severity remain challenging tasks, especially in diverse environments where fair and reliable outcomes are expected. This study addresses the problem with advanced machine learning models that achieve high accuracy and explainability, making the approach suitable for real-world depression screening and stage evaluation through EEG-based detection and staging. We developed EEG-based depression detection by jointly optimizing channel selection and the machine-learning models. Aggressive channel selection was performed with Recursive Feature Elimination (RFE), which identified 11 key channels; on these channels the MLP classifier achieved 98.7% accuracy supported by AI explainability, outperforming XGBoost and LGBM by 5.2 to 8.2% across multiple datasets (n = 184 to 382) and demonstrating strong generalization (precision = 1.000, recall = 0.966). This makes the MLP a trustworthy BCI tool for real-world depression screening. We also examined assigning depression stages (Mild/Moderate/Severe) from EEG data using models with and without GAN-based augmentation (198 to 5,000 samples). CNNs performed well on Moderate-stage classification, while ANFIS maintained 98.34% accuracy with consistent metrics (precision/recall = 0.98) and AI explainability. GAN augmentation improved the classification of severe cases by 15%, indicating that neuro-fuzzy systems combine well with synthetic data for precise stage determination. This is an important contribution to BCI research: it offers a data-efficient, scalable framework for EEG-based depression diagnosis and severity evaluation, bridging competitive modeling and clinical applicability. The work thus lays a pathway toward accessible, automated depression screening aids in both high-resource and low-resource settings.
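The channel-selection loop described in this abstract can be sketched in plain Python. This is an illustrative stand-in, not the authors' code: the study wraps RFE around its classifiers, whereas here a simple correlation score ranks hypothetical channels and the weakest channel is dropped each round.

```python
# Recursive-feature-elimination sketch over EEG channels (toy data).
# Channel names and values are hypothetical; the real ranker is model-based.

def correlation_score(xs, ys):
    # Absolute Pearson correlation between one channel's values and the labels.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return abs(cov / (vx * vy)) if vx and vy else 0.0

def rfe(data, labels, keep):
    """data: {channel_name: [values]}; drop the weakest channel each round."""
    channels = dict(data)
    while len(channels) > keep:
        scores = {ch: correlation_score(vals, labels) for ch, vals in channels.items()}
        del channels[min(scores, key=scores.get)]   # eliminate weakest channel
    return sorted(channels)

labels = [0, 0, 1, 1, 1, 0]
data = {
    "Fp1": [0.1, 0.2, 0.9, 0.8, 0.7, 0.3],   # tracks the label closely
    "Cz":  [0.5, 0.4, 0.5, 0.6, 0.5, 0.5],   # nearly constant, weak signal
    "O1":  [0.2, 0.1, 0.8, 0.9, 0.6, 0.2],   # tracks the label closely
}
print(rfe(data, labels, keep=2))
```

The same loop generalizes to any scorer: swap `correlation_score` for a cross-validated classifier's feature importance to approximate model-based RFE.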
Secure Image Transmission using Quantum-Resilient and Gate Network for Latent-Key Generation Gangappa, Malige; Satyanarayana, Balla V V; A, Dheeraj
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 4 (2025): October
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i4.1156

Abstract

Recently, deep learning-based techniques have developed rapidly, yielding promising results in various fields. As everyday tasks grow more complex, securing JPEG image data of arbitrary resolution requires more than deep learning alone. To address this, our research introduces a synergistic framework for a quantum-resistant deep learning technique intended to provide next-generation, robust security for dynamic-resolution, multi-JPEG-image joint compression-encryption. The proposed framework features dual-parallel processing through a dynamic gate network, using a convolutional neural network for fine-grained specialization together with quantum-inspired transformations. These transformations leverage Riemann zeta functions for deep feature extraction and are integrated with a chaotic sequence and dynamic iterations to generate a latent-fused chaotic key for joint image compression and encryption. Further, the authenticity of an encrypted image is bound by a secure pattern derived from a random transform variance, which anchors the cryptographic operations. The bound data is then transmitted through a Synergic Curve Key Exchange Engine fused with the well-known Chen attractor to generate non-invertible keys for transmission. Experimentally, image reconstruction quality measured by the structural similarity index was 98.82 ± 1.12. Security validation incorporates several metrics, including entropy analysis to quantify resistance against differential and statistical attacks, yielding 7.9980 ± 0.0015. In conclusion, the implementation uniquely combines latent-fused chaotic keying and improved key-space analysis with discrete-cosine-transform quantization and authenticated encryption, establishing an adversarial-resistant pipeline that simultaneously compresses data and validates integrity through pattern-bound authentication.
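The entropy figure quoted above (7.9980 bits per byte, against the ideal of 8 for uniformly random bytes) is standard Shannon entropy over byte frequencies. A generic stdlib check on toy data, not the paper's validation code:

```python
# Shannon entropy of a byte stream: ideal ciphertext approaches 8 bits/byte.
import math
from collections import Counter

def byte_entropy(data):
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

uniformish = bytes(range(256)) * 4        # every byte value equally likely
print(byte_entropy(uniformish))           # maximal entropy for 8-bit symbols
```

A real ciphertext sample would be passed in place of `uniformish`; values noticeably below 8 indicate statistical structure an attacker could exploit.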
A PSO-SVM-Based Approach for Classifying ECG and EEG Bio signals in Seizure Detection Zougagh, Lahcen; Bouyghf, Hamid; Nahid, Mohammed
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 4 (2025): October
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i4.1159

Abstract

Early identification of epileptic activity is essential for clinical analysis and for preventing disease progression. Despite advances in neurological diagnostic techniques, current analysis of epileptic seizures still relies on visual interpretation of the electroencephalogram (EEG) signal. Neurology specialists perform this examination manually to detect patterns, a process that is both challenging and time-consuming. Biomedical signals such as the EEG and electrocardiogram (ECG) are important tools for studying human brain disorders, particularly epilepsy. This paper develops a system that automatically detects epileptic seizures using discrete wavelet decomposition (DWT), particle swarm optimization (PSO), and a support vector machine (SVM), thereby relieving clinicians of this demanding task. The approach has three steps. First, a four-level discrete wavelet transform (DWT) extracts important information from the EEG and ECG signals by decomposing them into useful features. Second, the SVM classifier parameters are optimized using the PSO algorithm. Finally, the extracted features are classified with the optimized SVM. The system achieves an average accuracy of 97.92%, 100% recall, 96.15% specificity, and an AUC of 0.96. Our findings demonstrate the success of this method, showing that the PSO-optimized SVM performs significantly better in classification. They also demonstrate the value of using ECG signals as supplemental data. One implication of our work is the potential for wearable, real-time, customized seizure warning systems. In the future, these systems will be deployed on embedded platforms in real time and validated on larger datasets.
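The PSO step described above can be sketched generically: a swarm explores a hyperparameter range, with each particle pulled toward its personal best and the global best. The objective below is a hypothetical stand-in for the SVM's cross-validation error (a smooth surface with its minimum at C = 3); the coefficients and names are illustrative, not the paper's settings.

```python
# Minimal particle swarm optimization over a single hyperparameter.
import random

def pso(objective, lo, hi, n_particles=20, iters=100, seed=7):
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = list(pos)                        # personal best positions
    gbest = min(pos, key=objective)          # global best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (0.7 * vel[i]                         # inertia
                      + 1.5 * r1 * (pbest[i] - pos[i])     # cognitive pull
                      + 1.5 * r2 * (gbest - pos[i]))       # social pull
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))     # clamp to bounds
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i]
            if objective(pos[i]) < objective(gbest):
                gbest = pos[i]
    return gbest

mock_cv_error = lambda c: (c - 3.0) ** 2 + 0.1   # stand-in for CV error of an SVM
best_c = pso(mock_cv_error, lo=0.01, hi=100.0)
print(round(best_c, 2))
```

In the paper's setting, `mock_cv_error` would be replaced by the cross-validated error of an SVM trained with the candidate parameters, and the search would run over (C, gamma) jointly rather than one dimension.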
Automatic Target Recognition using Unmanned Aerial Vehicle Images with Proposed YOLOv8-SR and Enhanced Deep Super-Resolution Network Mishra, Gangeshwar; Tanwar, Rohit; Gupta, Prinima
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 4 (2025): October
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i4.888

Abstract

Modern surveillance requires automatic target recognition (ATR) that quickly and accurately identifies objects in unmanned aerial vehicle (UAV) imagery across multiple classes such as pedestrians, people, bicycles, cars, vans, trucks, tricycles, buses, and motors. Inadequate recognition rates in UAV target detection stem largely from the poor resolution of images captured from the UAV's distant perspective. The VisDrone dataset used for image analysis consists of 10,209 UAV photos. This work presents a comprehensive framework for multiclass target classification on VisDrone UAV imagery. YOLOv8-SR, which stands for "You Only Look Once Version 8 with Super-Resolution," builds on the YOLOv8s model with the Enhanced Deep Super-Resolution Network (EDSR). YOLOv8-SR uses the EDSR to convert low-resolution images to high-resolution ones, allowing better estimation of pixel values for downstream processing. The high-resolution images generated by the EDSR model achieved a Peak Signal-to-Noise Ratio (PSNR) of 25.32 and a Structural Similarity Index (SSIM) of 0.781. Over the range of confidence thresholds, the YOLOv8-SR model's precision is 63.44%, recall 46.64%, F1-score 52.69%, mean average precision (mAP@50) 51.58%, and mAP@50-95 50.67%. By combining the EDSR methodology with the YOLOv8-SR framework, the system generates detail-rich, high-resolution images that markedly exceed the informational quality of their low-resolution counterparts, substantially improving the precision and effectiveness of ATR.
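The PSNR metric quoted above has a direct closed form over pixel-wise mean squared error. A minimal stdlib computation on toy 8-bit pixel values (illustrative, not the paper's data):

```python
# PSNR for 8-bit images: 10 * log10(peak^2 / MSE), in decibels.
import math

def psnr(original, restored, peak=255.0):
    mse = sum((a - b) ** 2 for a, b in zip(original, restored)) / len(original)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

orig = [52, 55, 61, 66, 70, 61, 64, 73]   # hypothetical pixel row
rest = [54, 55, 60, 66, 69, 62, 64, 70]   # its super-resolved reconstruction
print(round(psnr(orig, rest), 2))
```

A lower PSNR such as the reported 25.32 dB corresponds to a much larger MSE than this toy example; SSIM, the paper's second metric, additionally compares local structure rather than raw pixel error.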
DR-FEDPAM: Detection of Diabetic Retinopathy using Federated Proximal Averaging Model P, Gaya Nair; B, Lanitha
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 4 (2025): October
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i4.915

Abstract

Diabetic retinopathy (DR) is an eye condition caused by damage to the blood vessels of the retina due to high blood sugar levels, commonly associated with diabetes. Without proper treatment, it can lead to visual impairment or blindness. Traditional machine learning (ML) approaches for detecting DR rely on centralized data aggregation, which raises significant privacy concerns and often encounters regulatory challenges. To address these issues, the DR-FEDPAM model is proposed for DR detection. First, the images are preprocessed using a Median Filter (MeF) and a Gaussian Star Filter (GaSF) to reduce noise and enhance image quality. The preprocessed images are then fed into a federated proximal model. Federated Learning (FL) enables multiple local models to train on distributed devices without sharing raw data. After the local models process the data, their parameters are aggregated through a Global Federated Averaging (GFA) model, which combines the parameters from all local models into a unified model that classifies each image as either normal or diabetic retinopathy. Performance is evaluated using precision (PR), F1-score (F1), specificity (SP), recall (RE), and accuracy (AC). DR-FEDPAM achieves a balanced trade-off with 7.8 million parameters, 1.7 FLOPs, and an average inference time of 13.9 ms. The model improves overall accuracy by 5.44%, 1.89%, and 4.43% over AlexNet, ResNet, and APSO, respectively. Experimental results show that the proposed method achieves 98.36% accuracy in detecting DR.
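In its simplest form, the Global Federated Averaging step described above reduces to a dataset-size-weighted average of local model parameters, so raw images never leave the device. The site names and numbers below are hypothetical:

```python
# FedAvg-style aggregation: average local parameter vectors,
# weighted by how many samples each site trained on.

def fed_avg(local_weights, local_sizes):
    """local_weights: list of parameter lists; local_sizes: samples per site."""
    total = sum(local_sizes)
    n_params = len(local_weights[0])
    global_w = [0.0] * n_params
    for w, size in zip(local_weights, local_sizes):
        for j in range(n_params):
            global_w[j] += (size / total) * w[j]
    return global_w

# Three hypothetical clinics holding different numbers of retinal images
site_weights = [[0.2, 1.0], [0.4, 0.8], [0.6, 0.6]]
site_sizes = [100, 300, 600]
print(fed_avg(site_weights, site_sizes))   # pulled toward the largest site
```

The "proximal" variant the paper names additionally penalizes each local model for drifting far from the current global model during local training; the aggregation step itself is unchanged.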
Multi-Modal Graph-Aware Transformer with Contrastive Fusion for Brain Tumor Segmentation Chowdhury, Rini; Kumar, Prashant; Suganthi, R.; Ammu, V.; Evance Leethial, R.; Roopa, C.
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 4 (2025): October
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i4.993

Abstract

Accurate segmentation of brain tumors in MRI images is critical for early diagnosis, surgical planning, and effective treatment strategies. Traditional deep learning models such as U-Net, Attention U-Net, and Swin-U-Net have demonstrated commendable success in tumor segmentation by leveraging Convolutional Neural Networks (CNNs) and transformer-based encoders. However, these models often fall short in capturing complex inter-modality interactions and long-range spatial dependencies, particularly in tumor regions with diffuse or poorly defined boundaries. Additionally, they suffer from limited generalization and demand substantial computational resources. To overcome these limitations, a novel approach named Graph-Aware Transformer with Contrastive Fusion (GAT-CF) is introduced. This model enhances segmentation performance by integrating the spatial attention mechanisms of transformers with graph-based relational reasoning across multiple MRI modalities, namely T1, T2, FLAIR, and T1CE. The graph-aware structure models inter-slice and intra-slice relationships more effectively, promoting better structural understanding of tumor regions. Furthermore, a multi-modal contrastive learning strategy is employed to align semantic features and distinguish complementary modality-specific information, thereby improving the model's discriminative power. The fusion of these techniques facilitates improved contextual understanding and more accurate boundary delineation in complex tumor regions. When evaluated on the BraTS2021 dataset, the proposed GAT-CF model achieved a Dice score of 99.1% and an IoU of 98.4%, surpassing state-of-the-art architectures such as Swin-UNet and SegResNet. It also demonstrated superior accuracy in detecting enhancing-tumor voxels and core tumor regions, highlighting its robustness, precision, and potential for clinical adoption in neuroimaging applications.
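The Dice and IoU scores reported above are standard overlap metrics between a predicted mask and the ground truth. A minimal computation on toy binary masks (real evaluation operates on 3-D voxel masks):

```python
# Dice = 2|P ∩ T| / (|P| + |T|);  IoU = |P ∩ T| / |P ∪ T|.

def dice(pred, truth):
    inter = sum(p and t for p, t in zip(pred, truth))
    return 2.0 * inter / (sum(pred) + sum(truth))

def iou(pred, truth):
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union

pred  = [1, 1, 1, 0, 0, 1]   # hypothetical flattened predicted mask
truth = [1, 1, 0, 0, 1, 1]   # hypothetical ground-truth mask
print(dice(pred, truth), iou(pred, truth))
```

Note that Dice is always at least as large as IoU on the same masks, which is why the paper's 99.1% Dice pairs with a slightly lower 98.4% IoU.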
A Mattress-Integrated ECG System for Home Detection of Obstructive Sleep Apnea Through HRV Analysis Using Wavelet Transform and XGBoost Classification Fitrieyatul Hikmah, Nada; Setiawan, Rachmad; Amalia, Rima; Syulthoni, Zain Budi; Nugroho, Dwi Oktavianto Wahyu; Syakir, Mu’afa Ali
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 4 (2025): October
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i4.1022

Abstract

Obstructive Sleep Apnea (OSA) is a potentially life-threatening sleep disorder that often remains undiagnosed due to the complexity of conventional diagnostic methods such as polysomnography (PSG). Currently, there is a lack of accessible, non-invasive diagnostic solutions suitable for home use. This study proposes a novel approach to automated OSA detection using single-lead electrocardiogram (ECG) signals acquired through non-contact conductive fabric electrodes embedded in a mattress, enabling unobtrusive monitoring during sleep. The main contributions of this study are a mattress-embedded contactless ECG monitoring system that eliminates the discomfort of traditional electrodes, and an advanced signal-processing framework integrating wavelet decomposition with machine learning for precise OSA identification. ECG signals from 35 subjects (30 males, 5 females, aged 27-63 years) diagnosed with OSA were obtained from the PhysioNet Apnea-ECG database, originally sampled at 100 Hz and up-sampled to 250 Hz for consistency with experimental recordings from healthy volunteers tested in various sleep positions. Signals were recorded non-invasively during sleep in various body positions and processed using the Discrete Wavelet Transform (DWT) up to the third level of decomposition. Processing involved Heart Rate Variability (HRV) analysis, extracting time-domain, frequency-domain, and non-linear properties. By analyzing HRV on the respiratory sinus arrhythmia spectrum, the respiration signal was obtained as ECG-derived respiration (EDR). Feature selection was performed using ANOVA, yielding a set of key features including respiratory rate, SD2, SDNN, LF/HF ratio, and pNN50. These features were classified using the XGBoost algorithm to determine the presence of OSA. The proposed system achieved a detection accuracy of 96.7%, demonstrating its potential for reliable home-based OSA diagnosis. This method improves comfort through non-contact sensing and supports early intervention by delivering timely alerts for high-risk patients.
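Two of the selected HRV features named in this abstract, SDNN and pNN50, are simple functions of the RR-interval series. The interval values below are illustrative, not taken from the Apnea-ECG database:

```python
# SDNN: standard deviation of RR intervals (ms).
# pNN50: percentage of successive RR differences exceeding 50 ms.

def sdnn(rr):
    mean = sum(rr) / len(rr)
    return (sum((x - mean) ** 2 for x in rr) / len(rr)) ** 0.5

def pnn50(rr):
    diffs = [abs(b - a) for a, b in zip(rr, rr[1:])]
    return 100.0 * sum(d > 50 for d in diffs) / len(diffs)

rr_ms = [810, 790, 860, 900, 840, 770, 830]   # hypothetical RR series
print(round(sdnn(rr_ms), 1), round(pnn50(rr_ms), 1))
```

In the described pipeline, features like these (plus frequency-domain and non-linear measures such as SD2 and LF/HF) form the vector handed to the XGBoost classifier for each epoch.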
Energy Conservation Clustering through Agent Nodes and Clusters (EECANC) for Wearable Health Monitoring and Smart Building Automation in Smart Hospitals using Wireless Sensor Networks Mirkar, Sulalah Qais; Shinde, Shilpa
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 4 (2025): October
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i4.1082

Abstract

Wireless Sensor Networks (WSNs) play a vital role in enabling real-time patient monitoring, medical device tracking, and automated management of building operations in smart hospitals. Wearable health sensors and hospital automation systems produce a constant flow of data, resulting in elevated energy usage and network congestion. This study introduces an advanced framework named Energy Conservation Clustering through Agent Nodes and Clusters (EECANC), designed to improve energy efficiency, extend network lifetime, and facilitate smart building automation in hospitals. The EECANC protocol combines wearable medical monitoring (oxygen saturation, body temperature, heart rate, and motion tracking) with intelligent hospital building automation (HVAC regulation, lighting management, and security surveillance) through a hierarchical WSN-based clustering system. By reducing routing overhead and data redundancy, cluster heads (CHs) and agent nodes (ANs) cut redundant transmissions and extend sensor battery life. EECANC limits direct interaction with the hospital's Smart Building Management System, thereby reducing emergency response times and improving energy efficiency throughout the hospital. The efficiency of EECANC was demonstrated by comparing its performance with existing clustering protocols, including EECAS, ECRRS, EA-DB-CRP, and IEE-LEACH. The protocol achieved a packet delivery rate of 83.33% to the base station, matching EECAS (83.33%) and exceeding ECRRS (48.45%), EA-DB-CRP (54.37%), and IEE-LEACH (59.13%). The system demonstrated better energy utilization, resulting in longer network lifetime and lower transmission costs, especially during high-traffic medical events. The first- and last-node death times show that EECANC is the most energy-efficient of the compared protocols. The EECANC model supports hospital automation, enhances patient safety, and promotes sustainability, providing a cost-effective and energy-efficient solution for future smart healthcare facilities.
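The energy saving that clustering buys can be sketched with a first-order radio model. The constants below are conventional illustrative values, not taken from the paper: members transmit a short hop to a cluster head instead of a long hop to the base station, and only the aggregated packet travels far.

```python
# First-order radio model: transmit cost grows with the square of distance.
E_ELEC = 50e-9     # J/bit, electronics cost per transmitted bit (illustrative)
E_AMP  = 100e-12   # J/bit/m^2, amplifier cost (illustrative)

def tx_energy(bits, distance):
    return bits * (E_ELEC + E_AMP * distance ** 2)

def round_energy_direct(n_nodes, bits, d_to_bs):
    # Every node uplinks straight to the base station each round.
    return n_nodes * tx_energy(bits, d_to_bs)

def round_energy_clustered(n_nodes, n_ch, bits, d_to_ch, d_to_bs):
    # Members send one short hop to a cluster head; only CHs uplink far.
    members = n_nodes - n_ch
    return members * tx_energy(bits, d_to_ch) + n_ch * tx_energy(bits, d_to_bs)

direct = round_energy_direct(100, 4000, d_to_bs=120)
clustered = round_energy_clustered(100, 5, 4000, d_to_ch=15, d_to_bs=120)
print(f"direct={direct:.4f} J  clustered={clustered:.4f} J")
```

With these numbers the clustered round costs roughly a tenth of the direct one, which is the mechanism behind the later first- and last-node death times reported for EECANC.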
Comparison of Deep Learning Methods for Sleep Apnea Detection Using Spectrogram-Transformed ECG Signals Hadiyoso, Sugondo; Wijayanto, Inung; Sekar Safitri, Ayu; Dewi Rahmaniar, Thalita; Rizal, Achmad; Lata Tripathi, Suman
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 4 (2025): October
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i4.967

Abstract

Sleep apnea is a sleep disorder in which breathing is repeatedly interrupted during sleep. If left untreated, it can lead to serious health problems such as high blood pressure, poor sleep quality, and difficulty concentrating. Sufferers often do not realize they have sleep apnea because it occurs during sleep. Diagnosis is generally made through interviews with the patient and their family to identify common symptoms such as snoring, and then confirmed through physical examination and polysomnography (PSG). Since sleep apnea is related to respiratory activity, which correlates with changes in cardiac activity, electrocardiogram (ECG) examination during sleep serves as an alternative diagnostic method. This study therefore presents a comparative analysis of deep learning models for detecting sleep apnea from spectrogram-based ECG representations. The raw ECG signals were transformed into spectrograms and saved as images for classification into normal and abnormal categories. Deep learning (DL) methods, namely EfficientNet, MobileNetV2, DenseNet, AlexNet, and VGG16, were applied to classify normal and sleep apnea ECGs and to identify the best-performing model. The evaluation shows that EfficientNet achieved the highest performance, with 91.01% accuracy, 90.70% precision, 95.76% recall, and a 92.61% F1-score, outperforming the other evaluated models. By combining a spectrogram-based approach with a scalable architecture, the method demonstrates competitive accuracy for sleep apnea detection. Exploring other approaches to further improve accuracy remains an interesting direction for future research.
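The spectrogram transform described in this abstract amounts to cutting the ECG into overlapping windows and taking a magnitude spectrum per window. A naive DFT-based sketch on a toy signal (real pipelines use an FFT-based STFT with much longer windows):

```python
# Toy spectrogram: overlapping windows -> magnitude spectra (naive DFT).
import cmath, math

def spectrogram(signal, win=8, hop=4):
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win]
        mags = []
        for k in range(win // 2 + 1):        # keep non-negative frequencies
            z = sum(x * cmath.exp(-2j * math.pi * k * n / win)
                    for n, x in enumerate(seg))
            mags.append(abs(z))
        frames.append(mags)
    return frames   # time x frequency matrix, ready to render as an image

# Toy input: one cycle per 4 samples, so its energy lands in DFT bin k = 2
sig = [math.sin(2 * math.pi * n / 4) for n in range(24)]
spec = spectrogram(sig)
print(len(spec), max(range(len(spec[0])), key=spec[0].__getitem__))
```

Each row of the resulting matrix becomes a column of the spectrogram image fed to the CNN classifiers compared in the study.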
Precise Electrocardiogram Signal Analysis Using ResNet, DenseNet, and XceptionNet Models in Autistic Children Yunidar, Yunidar; Melinda, Melinda; Albahri, Albahri; Ramadhani, Hanum Aulia; Dimiati, Herlina; Basir, Nurlida
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 4 (2025): October
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i4.1044

Abstract

In autistic children, one important physiological aspect to examine is the heart condition, which can be assessed through electrocardiogram (ECG) signal analysis. However, ECG signals in autistic children often contain noise, making both manual and conventional analysis challenging. This study therefore analyzes the ECG signals of autistic children using a classification method that distinguishes two main conditions: playing and calm. A deep learning approach employing Convolutional Neural Network (CNN) architectures was used to distinguish the heart conditions of autistic children accurately. The data consist of 700 ECG signal samples per class, processed through filtering, windowing, and augmentation stages to obtain balanced data. Three CNN architectures, ResNet, DenseNet, and XceptionNet, were tested. Although these architectures were originally designed for 2-D and 3-D image data, they were modified to operate on 1-D input data. The evaluation results show that the XceptionNet model achieved the best performance, with accuracy, precision, recall, and F1-score of 97.14% each, indicating a good ability to capture the complex patterns of ECG signals. ResNet also performed well with 96.19% accuracy, while DenseNet was slightly lower with 94.76% accuracy and correspondingly lower evaluation metrics. Overall, this study demonstrates that deep CNN architectures can enhance the accuracy of ECG signal classification in autistic children.
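The 1-D adaptation mentioned in this abstract centers on convolving along the time axis rather than over image pixels. A minimal valid-mode 1-D convolution, deep-learning style (no kernel flip), on toy ECG values:

```python
# Valid-mode 1-D convolution: slide the kernel along the signal.

def conv1d(signal, kernel):
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

ecg = [0.0, 0.1, 0.9, 0.2, 0.0, -0.1]   # hypothetical ECG segment
edge_kernel = [1.0, -1.0]               # responds to sharp rises/falls (QRS-like edges)
print(conv1d(ecg, edge_kernel))
```

A 1-D CNN layer stacks many such learned kernels, so the same ResNet/DenseNet/XceptionNet block structure carries over once the 2-D convolutions are swapped for their 1-D counterparts.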