Contact Name
Triwiyanto
Contact Email
triwiyanto123@gmail.com
Phone
+628155126883
Journal Mail Official
editorial.jeeemi@gmail.com
Editorial Address
Department of Electromedical Engineering, Poltekkes Kemenkes Surabaya, Jl. Pucang Jajar Timur No. 10, Surabaya, Indonesia
Location
Kota Surabaya,
Jawa Timur
INDONESIA
Journal of Electronics, Electromedical Engineering, and Medical Informatics
ISSN: -     EISSN: 2656-8632     DOI: https://doi.org/10.35882/jeeemi
The Journal of Electronics, Electromedical Engineering, and Medical Informatics (JEEEMI) is a peer-reviewed open-access journal. The journal invites scientists and engineers throughout the world to exchange and disseminate theoretical and practice-oriented topics covering three (3) major areas of research: 1) Electronics, 2) Biomedical Engineering, and 3) Medical Informatics (with emphasis on hardware and software design). Submitted papers must be written in English for an initial review stage by the editors and a further review process by a minimum of two reviewers.
Articles 287 Documents
Development of Human Activity Recognition (HAR) for Health Rehabilitation Using mmWave Radar with 3D Point Cloud Data Yudha Setyawan, Raden Rofiq; Fiky Y. Suratman; Khilda Afifah
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 2 (2025): April
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i2.642

Abstract

Postoperative recovery is a crucial phase in ensuring successful rehabilitation. However, many healthcare facilities face challenges due to the limited availability of medical personnel, making routine patient monitoring difficult. This limitation can delay the early detection of complications and reduce overall recovery effectiveness. To address this issue, this study proposes a non-invasive, radar-based system for remote postoperative patient monitoring. The proposed system utilizes the IWR6843AOP radar to generate 3D point cloud data, spatially representing patient movements. This approach enables continuous monitoring without compromising patient privacy, allowing healthcare providers to offer more efficient care. The collected data undergoes preprocessing, including normalization, labeling, and dataset splitting, before being classified using deep learning models such as 3D CNN, 3D CNN+LSTM, 3D CNN+Bi-LSTM, PointNet, PointNet++, and RNN. The dataset consists of six activity categories: empty space, sitting, standing, walking, running, and squatting, recorded at a frame frequency of 18.18 Hz. Experimental results show that the 3D CNN combined with Bi-LSTM achieves the highest accuracy of 90%, surpassing models like PointNet and RNN. These findings indicate that a radar-based and deep learning-driven approach offers an accurate, efficient, and non-intrusive solution for postoperative monitoring, reducing the need for direct medical supervision. This technology has significant potential for broader healthcare applications, contributing to more advanced, accessible, and technology-driven patient monitoring systems. By integrating artificial intelligence and radar sensing, this research paves the way for innovative solutions in modern healthcare, ensuring better postoperative outcomes while optimizing medical resources.
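
As a concrete illustration of the 3D CNN + Bi-LSTM architecture that performed best here, below is a minimal PyTorch sketch, not the authors' code: it assumes each point-cloud frame has been voxelized into a 16×16×16 occupancy grid, and the channel widths, hidden size, and sequence length are all illustrative.

```python
import torch
import torch.nn as nn

class Cnn3dBiLstm(nn.Module):
    """3D CNN per frame, Bi-LSTM across frames, 6-way activity head."""
    def __init__(self, num_classes=6, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(            # input: (1, 16, 16, 16) voxels
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Flatten(),                        # 32 * 4^3 = 2048 features
        )
        self.lstm = nn.LSTM(2048, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):                        # x: (B, T, 1, 16, 16, 16)
        b, t = x.shape[:2]
        feats = self.encoder(x.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)                # (B, T, 2 * hidden)
        return self.head(out[:, -1])             # logits for the 6 activities

logits = Cnn3dBiLstm()(torch.randn(2, 20, 1, 16, 16, 16))
print(logits.shape)  # torch.Size([2, 6])
```
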
A Novel Deep Learning Framework for Enhanced Glaucoma Detection Using Attention-Gated U-Net, Deep Wavelet Scattering, and Vision Transformers V, Krishnamoorthy; S, Sivanantham; V, Akshaya; S, Nivedha; Depuru, Sivakumar; M, Manikandan
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 2 (2025): April
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i2.706

Abstract

Globally, glaucoma is a major cause of permanent blindness, and preserving eyesight depends on early detection. In this work, we offer a novel deep-learning framework for enhanced glaucoma prediction: a denoising generative adversarial network preprocesses the input image, an Attention-Gated U-Net with Dilated Convolutions then segments the optic cup and optic disc, features are extracted with a Deep Wavelet Scattering Network, and finally glaucoma classification is carried out by Vision Transformers. The attention-gated U-Net with dilated convolutions improves the accuracy of optic disc and cup boundaries by 7% compared to conventional U-Net methods. The Deep Wavelet Scattering Network (DWSN) achieves a 5% improvement in feature discrimination over conventional CNNs by capturing multiscale texture and structural information. Lastly, a Vision Transformer (ViT) based on transfer learning is used for classification; it attains a 94.6% accuracy rate, a 93.8% sensitivity rate, and a 95.2% specificity rate. The suggested approach outperformed CNN-based models, improving by about 4% on all criteria, and achieved an F1 score of 0.95 and an AUC (Area Under Curve) of 0.96 when tested on publicly accessible glaucoma datasets. In summary, the framework performs multi-stage deep-learning processing for glaucoma prediction by integrating a denoising generative adversarial network for image preprocessing, an Attention-Gated U-Net with Dilated Convolutions for precise optic cup and disc segmentation, deep wavelet scattering for feature extraction, and Vision Transformers for classification.
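
The attention-gated skip connection is the most reusable piece of this pipeline; a minimal PyTorch sketch of a generic additive attention gate follows. It is not the paper's exact module, it assumes the gating signal and skip features are already at the same spatial resolution, and the channel sizes are arbitrary.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate: re-weight U-Net skip features by a gating signal."""
    def __init__(self, g_ch, x_ch, inter_ch):
        super().__init__()
        self.wg = nn.Conv2d(g_ch, inter_ch, 1)                 # project gate
        self.wx = nn.Conv2d(x_ch, inter_ch, 1)                 # project skip
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, 1), nn.Sigmoid())

    def forward(self, g, x):
        att = self.psi(torch.relu(self.wg(g) + self.wx(x)))   # (B, 1, H, W)
        return x * att                                         # gated skip features

gate = AttentionGate(g_ch=128, x_ch=64, inter_ch=32)
out = gate(torch.randn(1, 128, 32, 32), torch.randn(1, 64, 32, 32))
print(out.shape)  # torch.Size([1, 64, 32, 32])
```
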
Hybrid Fuzzy Logic and Metaheuristic Optimized TriNetFusion Model for Liver Tumor Segmentation Mohammed Ashik; Patrick, Arun; D. Dennis Ebenezer; Rini Chowdhury; Prashant Kumar; Ida, S. Jhansi
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 2 (2025): April
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i2.657

Abstract

Liver tumor segmentation plays a vital role in medical imaging, enabling accurate diagnosis and precise treatment planning for liver cancer. Traditional methods such as threshold-based techniques and region-growing algorithms have been explored, and more recently, deep learning models have shown promise in automating and improving segmentation tasks. However, these approaches often face significant limitations, including challenges in accurately delineating tumor boundaries, high sensitivity to noise, and the risk of overfitting, especially when dealing with complex tumor structures and limited annotated data. To overcome these limitations, a novel Hybrid Fuzzy Logic and Metaheuristic Optimized TriNetFusion Model is proposed. This model integrates the strengths of fuzzy logic, metaheuristic optimization, and deep learning to deliver a more reliable and adaptable segmentation framework. Fuzzy logic is utilized to handle the inherent uncertainty and ambiguity in medical images, particularly in tumor boundary regions where intensity variations are subtle and complex. Metaheuristic optimization algorithms are employed to fine-tune the parameters of the segmentation model effectively, ensuring a more generalized and adaptive performance across different datasets. At the core of the model lies TriNetFusion, a multi-branch deep learning architecture that fuses complementary features extracted at various levels. The fusion of these multi-level features contributes to robust segmentation by capturing both global and local image characteristics. This model is specifically designed to adapt to irregular and complex tumor shapes, significantly reducing false positives and improving boundary precision. Experimental validation using benchmark liver tumor datasets demonstrates that the proposed model achieves a segmentation accuracy of 96% with a low loss value of 0.2, indicating strong generalization without overfitting. The hybrid approach not only enhances segmentation precision but also ensures robustness and adaptability, making it a highly promising solution for liver tumor segmentation in clinical practice.
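
The abstract does not spell out TriNetFusion's internals, so the sketch below only illustrates the general idea of a multi-branch network fusing features at different receptive fields into one segmentation map; the dilation rates and layer widths are assumptions, and the fuzzy-logic and metaheuristic-tuning stages are omitted entirely.

```python
import torch
import torch.nn as nn

class TriBranchSeg(nn.Module):
    """Three parallel branches (different dilations) fused by a 1x1 conv."""
    def __init__(self, in_ch=1, width=16):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, width, 3, padding=d, dilation=d) for d in (1, 2, 4)
        )
        self.fuse = nn.Conv2d(3 * width, 1, kernel_size=1)  # tumor-mask logits

    def forward(self, x):                                   # x: (B, 1, H, W) CT slice
        feats = [torch.relu(b(x)) for b in self.branches]   # local-to-global features
        return self.fuse(torch.cat(feats, dim=1))           # (B, 1, H, W)

mask_logits = TriBranchSeg()(torch.randn(1, 1, 64, 64))
print(mask_logits.shape)  # torch.Size([1, 1, 64, 64])
```
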
Grad-CAM based Visualization for Interpretable Lung Cancer Categorization using Deep CNN Models Mothkur, Rashmi; Soubhagyalakshmi, Pullagura; C. B., Swetha
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 3 (2025): July
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i3.690

Abstract

Grad-CAM (Gradient-weighted Class Activation Mapping) has emerged as a crucial tool for elucidating deep learning models, particularly convolutional neural networks (CNNs), by visually highlighting the regions of input images that contribute most to a model's predictions. In the context of lung cancer histopathological image classification, this approach provides insight into the decision-making process of models like InceptionV3, XceptionNet, and VGG19. These CNN architectures, renowned for their high performance in image categorization tasks, can be leveraged for automated diagnosis of lung cancer from histopathological images. By applying Grad-CAM to these models, heatmaps can be generated that reveal the areas of the tissue samples most influential in categorizing the images as lung adenocarcinoma, squamous cell carcinoma, or benign patches. This technique allows for the visualization of the network's focus on specific regions, such as cancerous cells or abnormal tissue structures, which may otherwise be difficult to explain. Using pre-trained models fine-tuned for the task, the Grad-CAM method computes the gradients of the target class with respect to the final convolutional layer, generating a heatmap that can be overlaid on the input image. The results of Grad-CAM for InceptionV3, XceptionNet, and VGG19 offer distinct insights, as each model has unique characteristics: InceptionV3 focuses on multi-scale features, XceptionNet captures deeper patterns with separable convolutions, and VGG19 emphasizes simpler, more global attributes. By juxtaposing the heatmaps generated by each architecture, one can assess each model's focus areas, facilitating better comprehension of and confidence in the model's predictions, which is crucial for clinical applications. Ultimately, the Grad-CAM approach not only increases model transparency but also improves the interpretability of lung cancer diagnosis in histopathological image categorization.
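
Grad-CAM itself is compact enough to sketch directly. The snippet below computes a heatmap for a torchvision VGG19 with ImageNet weights as a stand-in for the fine-tuned histopathology models; only the mechanics (pooled gradients weighting the last conv layer's activations) are the point here.

```python
import torch
import torchvision.models as models

model = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).eval()
# Hook the last convolutional layer to capture activations and gradients.
target = [m for m in model.features if isinstance(m, torch.nn.Conv2d)][-1]
acts, grads = {}, {}
target.register_forward_hook(lambda m, i, o: acts.update(v=o))
target.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

x = torch.randn(1, 3, 224, 224)                 # stand-in for a tissue image
logits = model(x)
logits[0, logits.argmax()].backward()           # gradient of the predicted class

w = grads["v"].mean(dim=(2, 3), keepdim=True)   # GAP of gradients -> channel weights
cam = torch.relu((w * acts["v"]).sum(dim=1))    # weighted sum of activation maps
cam = cam / cam.max().clamp(min=1e-8)           # normalize; upsample to overlay
print(cam.shape)                                # torch.Size([1, 14, 14])
```
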
Enhancing Skin Cancer Classification with Mixup Data Augmentation and EfficientNet D, Shamia; Umapriya, R.; Prasad, M. L. M.; Rini Chowdhury; Prashant Kumar; K. Vishnupriya
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 2 (2025): April
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i2.703

Abstract

Skin lesion classification and segmentation are two crucial tasks in dermatological diagnosis, where automated approaches can significantly aid in early detection and improve treatment planning. The proposed work presents a comprehensive framework that integrates K-means clustering for segmentation, Mixup augmentation for data enhancement, and the EfficientNet B7 model for classification. Initially, K-means clustering is applied as a pre-processing step to accurately segment the lesion regions from the background, ensuring that the model focuses on processing the most relevant and informative features. This segmentation enhances the model’s ability to differentiate between subtle lesion boundaries and surrounding skin textures. To address the common issue of class imbalance and to improve the overall robustness of the classification model, Mixup augmentation is employed. This technique generates synthetic samples by linearly interpolating between pairs of images and their corresponding labels, effectively enriching the training dataset and promoting better generalization. For the classification task, EfficientNet B7 is utilized due to its superior feature extraction capabilities, optimized scalability, and excellent performance across various image recognition challenges. The entire pipeline was evaluated on a dataset comprising 10,015 dermatoscopic images covering seven distinct categories of skin lesions. The proposed method achieved outstanding performance, demonstrating a precision rate of 95.3% and maintaining a low loss of 0.2 during evaluation. Compared to traditional machine learning and earlier deep learning approaches, the proposed framework showed significant improvements, particularly in handling complex patterns and imbalanced datasets, making it a promising solution for real-world clinical deployment in dermatology.
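
The Mixup step is simple to reproduce; here is a hedged sketch of the interpolation for images and one-hot labels. The Beta parameter alpha = 0.4 is an arbitrary choice, since the abstract does not state the value used.

```python
import numpy as np
import torch
import torch.nn.functional as F

def mixup(x, y, alpha=0.4, num_classes=7):
    """Blend a batch with a shuffled copy of itself; labels blend the same way."""
    lam = np.random.beta(alpha, alpha)           # mixing coefficient in (0, 1)
    idx = torch.randperm(x.size(0))              # random pairing within the batch
    y1h = F.one_hot(y, num_classes).float()
    return lam * x + (1 - lam) * x[idx], lam * y1h + (1 - lam) * y1h[idx]

x = torch.randn(8, 3, 224, 224)                  # stand-in dermatoscopic batch
y = torch.randint(0, 7, (8,))                    # 7 lesion classes, as in the paper
x_mix, y_mix = mixup(x, y)
print(x_mix.shape, y_mix.shape)                  # (8, 3, 224, 224) (8, 7)
```

Training then uses a soft-label loss (e.g., cross-entropy against y_mix) so the model sees convex combinations of classes rather than hard labels.
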
Advancement of Lung Cancer Diagnosis with Transfer Learning: Insights from VGG16 Implementation Lakide, Vedavrath; Ganesan, V.
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 3 (2025): July
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i3.704

Abstract

Lung cancer continues to be one of the leading causes of cancer-related mortality globally, largely due to the challenges associated with its early and accurate detection. Timely diagnosis is critical for improving survival rates, and advances in artificial intelligence (AI), particularly deep learning, are proving to be valuable tools in this area. This study introduces an enhanced deep learning-based approach for lung cancer classification using the VGG16 neural network architecture. While previous research has demonstrated the effectiveness of ResNet-50 in this domain, the proposed method leverages the strengths of VGG16, particularly its deep architecture and robust feature extraction capabilities, to improve diagnostic performance. To address the limitations posed by scarce labelled medical imaging data, the model incorporates transfer learning and fine-tuning techniques. It was trained and validated on a well-curated dataset of lung CT images. The VGG16 model achieved a high training accuracy of 99.09% and a strong validation accuracy of 95.41%, indicating its ability to generalize well across diverse image samples. These results reflect the model’s capacity to capture intricate patterns and subtle features within medical imagery, which are often critical for accurate disease classification. A comparative evaluation between VGG16 and ResNet-50 reveals that VGG16 outperforms its predecessor in terms of both accuracy and reliability. The improved performance underscores the potential of the proposed approach as a reliable and scalable AI-driven diagnostic solution. Overall, this research highlights the growing role of deep learning in enhancing clinical decision-making, offering a promising path toward earlier detection of lung cancer and ultimately contributing to better patient outcomes.
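
A typical VGG16 transfer-learning setup of the kind described, a frozen convolutional base plus a re-initialized classifier head, can be sketched as follows; the two-class head, learning rate, and three-channel input are assumptions rather than values from the paper.

```python
import torch
import torch.nn as nn
import torchvision.models as models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                    # freeze the pretrained conv base
model.classifier[6] = nn.Linear(4096, 2)       # new head, e.g. malignant vs. normal

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 224, 224)                # stand-in CT slices (3-channel)
loss = criterion(model(x), torch.randint(0, 2, (4,)))
loss.backward()
optimizer.step()
print(float(loss))
```

Fine-tuning in the paper's sense would later unfreeze some upper convolutional blocks at a lower learning rate.
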
Performance Evaluation of Classification Algorithms for Parkinson’s Disease Diagnosis: A Comparative Study Baruah, Dhiraj; Rehman, Rizwan; Bora, Pranjal Kumar; Mahanta, Priyakshi; Dutta, Kankana; Konwar, Pinakshi
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 3 (2025): July
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i3.713

Abstract

Selection and implementation of classification algorithms, along with proper preprocessing methods, are important for the accuracy of predictive models. This paper compares some well-known and frequently used algorithms for classification tasks and performs an in-depth analysis. In this study we analyzed four of the most frequently used algorithms, viz. random forest (RF), decision tree (DT), logistic regression (LR), and support vector machine (SVM). The study was conducted on the well-known Oxford Parkinson’s Disease Detection dataset obtained from the UCI Machine Learning Repository. We evaluated the algorithms' performance using six distinct approaches. Firstly, we used the classifiers without any method to enhance their performance. Secondly, we applied Principal Component Analysis (PCA) to reduce the dimensionality of the dataset. Thirdly, we used a collinearity-based feature elimination (CFE) method, in which we computed the correlation among the features and, if the correlation between a pair of features exceeded the threshold of 0.9, eliminated one of the pair. Fourthly, we adopted the synthetic minority oversampling technique (SMOTE) to synthetically increase the instances of the minority class. Fifthly, we combined PCA + SMOTE, and in the sixth approach, we combined CFE + SMOTE. The study demonstrates that SVM is highly effective for Parkinson’s disease classification. SVM maintained high accuracy, precision, recall, and F1-score across the various preprocessing techniques, including PCA, CFE, and SMOTE, making it robust and reliable for clinical applications. RF showed improved results with SMOTE; however, it experienced reduced performance with PCA and CFE, indicating its dependence on original feature interactions. DT benefited from PCA, while LR showed limited improvements and sensitivity to oversampling. These findings emphasize the importance of selecting appropriate preprocessing techniques to enhance model performance.
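
The CFE step (drop one feature of any pair with correlation above 0.9) combined with SMOTE and SVM can be sketched as below; the code uses a synthetic stand-in for the Oxford dataset and assumes the imbalanced-learn package provides SMOTE.

```python
import numpy as np
import pandas as pd
from imblearn.over_sampling import SMOTE        # assumes imbalanced-learn installed
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: the real study loads the Oxford Parkinson's data (195 x 22).
X, y = make_classification(n_samples=195, n_features=22, weights=[0.25, 0.75],
                           random_state=0)
X = pd.DataFrame(X)

# CFE: keep the upper triangle of |corr| and drop one feature per pair > 0.9.
upper = X.corr().abs().where(np.triu(np.ones((22, 22), dtype=bool), k=1))
X = X.drop(columns=[c for c in upper.columns if (upper[c] > 0.9).any()])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_tr, y_tr = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # balance classes

scaler = StandardScaler().fit(X_tr)
clf = SVC().fit(scaler.transform(X_tr), y_tr)
print(f1_score(y_te, clf.predict(scaler.transform(X_te))))
```
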
Applied Machine Learning in EEG data Classification to Classify Major Depressive Disorder by Critical Channels Dhekane, Sudhir; Khandare, Anand
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 3 (2025): July
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i3.719

Abstract

The electroencephalogram (EEG) stands out as a promising non-invasive tool for assessing depression. However, efficient channel selection is crucial for pinpointing the key channels that can differentiate between different stages of depression within the vast dataset. This study presents a comprehensive strategy for optimizing EEG channels to classify Major Depressive Disorder (MDD) using machine learning (ML) and deep learning (DL) approaches, and monitors the effect of central-lobe channels. A thorough review underscores the vital significance of EEG channel selection in the analysis of mental disorders: neglecting this optimization step can result in heightened computational expense, squandered resources, and potentially inaccurate classification results. Our assessment encompassed a range of techniques, such as Asymmetric Variance Ratio (AVR), Amplitude Asymmetry Ratio (AAR), entropy-based selection employing a Probability Mass Function (PMF), and Recursive Feature Elimination (RFE). RFE exhibited superior performance, particularly in pinpointing the most pertinent EEG channels while including central-lobe channels like Fz, Cz, and Pz; with this, accuracy between 97% and 99% is recorded by the Electroencephalography Neural Network (EEGNet). Our experimental findings indicate that models using RFE achieved enhanced accuracy in classifying depressive disorders across diverse classifiers: EEGNet (96%), Random Forest (95%), Long Short-Term Memory (LSTM, 97.4%), 1D-CNN (95%), and Multi-Layer Perceptron (98%), irrespective of central-lobe incorporation. A pivotal contribution of this research is the development of a robust Multilayer Perceptron (MLP) model trained on EEG data from 382 participants, which achieved an accuracy of 98.7%, a perfect precision score of 1.00, an F1-score of 0.983, and a recall of 0.966, making it an enhanced technique for depression classification. Significant channels identified include Fp1, Fp2, F7, F4, F8, T3, C3, Cz, T4, T5, and P3, offering critical insights about depression. Our findings show that optimized EEG channel selection via RFE enhances depression classification accuracy in the field of brain-computer interfaces.
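
RFE-based channel selection is easy to illustrate in scikit-learn. The sketch below uses random stand-in features (one summary value per 10-20 channel, e.g., band power), so the channels it picks are meaningless; only the mechanics of recursively dropping the weakest channel are shown.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

channels = ["Fp1", "Fp2", "F7", "F4", "F8", "Fz", "C3", "Cz",
            "T3", "T4", "T5", "Pz", "P3"]          # 10-20 names from the abstract
rng = np.random.default_rng(0)
X = rng.normal(size=(382, len(channels)))          # 382 participants, as reported
y = rng.integers(0, 2, size=382)                   # MDD vs. control (random here)

# RFE refits the model, drops the lowest-weight channel, and repeats.
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=6).fit(X, y)
print([ch for ch, keep in zip(channels, rfe.support_) if keep])
```
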
Automated ICD Medical Code Generation for Radiology Reports using BioClinicalBERT with Multi-Head Attention Network D., Sasikala; N., Sarrvesh; J., Sabarinath; S., Theetchenya; S., Kalavathi
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 3 (2025): July
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i3.775

Abstract

International Classification of Diseases (ICD) coding plays a pivotal role in healthcare systems by providing a standard method for classifying medical diagnoses, treatments, and procedures. However, the process of manually applying ICD codes to clinical records is both time-consuming and error-prone, particularly considering the large magnitude of medical terminologies and the periodic changes to the coding system. This work introduces a Hierarchical Multi-Head Attention Network (HMHAN) that aims to automate ICD coding using domain-related embeddings with an attention mechanism. The proposed method uses BioClinicalBERT for feature extraction from clinical text and then a two-level attention mechanism to learn hierarchical dependencies between labels. BioClinicalBERT is pre-trained on large biomedical and clinical corpora, enabling it to capture complex contextual relationships specific to medical language more effectively. The multi-head attention mechanism enables the model to focus on different parts of the input text simultaneously, learning intricate associations between medical terms and corresponding ICD codes at various levels. The method uses SMOTE (Synthetic Minority Oversampling Technique) based multi-label resampling to address class imbalance: SMOTE generates synthetic examples for underrepresented classes, allowing the model to learn better from imbalanced data without overfitting. For this work, the MIMIC-IV dataset of de-identified radiology reports and corresponding ICD codes is used. The performance of the model is assessed with the F1 score, Hamming loss, and ROC-AUC metrics. The model's results, an F1 score of 0.91, a Hamming loss of 0.07, and a ROC-AUC of 0.92, show promising research directions for automating the ICD coding process. This system will improve the effectiveness of healthcare workflows by automating ICD code generation for advanced clinical care.
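
A minimal sketch of the encode-then-attend stage might look like the following; the Hugging Face checkpoint name, the 8 attention heads, and the 50-code label space are assumptions, and HMHAN's two-level hierarchical attention is collapsed into a single self-attention layer for brevity.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer   # assumes transformers installed

name = "emilyalsentzer/Bio_ClinicalBERT"            # assumed checkpoint
tok = AutoTokenizer.from_pretrained(name)
bert = AutoModel.from_pretrained(name)

attn = nn.MultiheadAttention(embed_dim=768, num_heads=8, batch_first=True)
head = nn.Linear(768, 50)                           # e.g., 50 candidate ICD codes

inputs = tok("No acute cardiopulmonary abnormality.", return_tensors="pt")
with torch.no_grad():
    h = bert(**inputs).last_hidden_state            # (1, T, 768) token embeddings
    a, _ = attn(h, h, h)                            # self-attention over tokens
    probs = torch.sigmoid(head(a.mean(dim=1)))      # multi-label code probabilities
print(probs.shape)                                  # torch.Size([1, 50])
```
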
Breast Cancer Classification on Ultrasound Images Using DenseNet Framework with Attention Mechanism Azka, Hanina Nafisa; Wiharto, Wiharto; Suryani, Esti
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 3 (2025): July
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i3.779

Abstract

Breast cancer is one of the most prevalent and life-threatening diseases among women worldwide, and early detection is critical for increasing survival rates. Ultrasound imaging is commonly used for breast cancer screening because it is non-invasive, safe, and cost-effective. However, ultrasound images are often of low quality and contain significant noise, which can hinder the effectiveness of classification models. This study proposes an enhanced breast cancer classification model that leverages transfer learning in combination with attention mechanisms to improve diagnostic performance. The main contribution of this research is the introduction of Dense-SASE, a novel architecture that combines DenseNet-121 with two powerful attention modules: Scaled Dot-Product Attention and the Squeeze-and-Excitation (SE) Block. These mechanisms are integrated to improve feature representation and allow the model to focus on the most relevant regions of the ultrasound images. The proposed method was evaluated on a publicly available breast ultrasound image dataset, with classification performed across three categories: normal, benign, and malignant. Experimental results demonstrate that the Dense-SASE model achieves an accuracy of 98.29%, a precision of 97.97%, a recall of 98.98%, and an F1-score of 98.44%. Additionally, Grad-CAM visualizations demonstrated the model's capability to localize lesion areas effectively, avoiding non-informative regions and confirming the model's interpretability. In conclusion, the Dense-SASE model significantly improves the accuracy and reliability of breast cancer classification in ultrasound images. By effectively learning and focusing on clinically relevant features, this approach offers a promising solution for computer-aided diagnosis (CAD) systems and has the potential to assist radiologists in early and accurate breast cancer detection.
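
Of the two attention modules named, the Squeeze-and-Excitation block is the more self-contained to sketch; below is a generic SE block in PyTorch with an assumed reduction ratio of 16, not necessarily the paper's configuration.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: learn per-channel weights and rescale the maps."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                        # x: (B, C, H, W)
        s = x.mean(dim=(2, 3))                   # squeeze: global average pool
        w = self.fc(s).unsqueeze(-1).unsqueeze(-1)
        return x * w                             # excite: channel-wise rescaling

out = SEBlock(64)(torch.randn(1, 64, 32, 32))
print(out.shape)  # torch.Size([1, 64, 32, 32])
```

In Dense-SASE, such blocks would sit inside the DenseNet-121 backbone alongside the scaled dot-product attention, steering the network toward lesion-relevant channels.
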