Contact Name
Triwiyanto
Contact Email
triwiyanto123@gmail.com
Phone
+628155126883
Journal Mail Official
editorial.jeeemi@gmail.com
Editorial Address
Department of Electromedical Engineering, Poltekkes Kemenkes Surabaya Jl. Pucang Jajar Timur No. 10, Surabaya, Indonesia
Location
Kota Surabaya,
Jawa Timur
INDONESIA
Journal of Electronics, Electromedical Engineering, and Medical Informatics
ISSN : -     EISSN : 2656-8632     DOI : https://doi.org/10.35882/jeeemi
The Journal of Electronics, Electromedical Engineering, and Medical Informatics (JEEEMI) is a peer-reviewed open-access journal. The journal invites scientists and engineers throughout the world to exchange and disseminate theoretical and practice-oriented topics covering three (3) major areas of research: 1) Electronics, 2) Biomedical Engineering, and 3) Medical Informatics (with emphasis on hardware and software design). Submitted papers must be written in English for an initial review stage by editors and a further review process by a minimum of two reviewers.
Articles 295 Documents
Adaptive Threshold-Enhanced Deep Segmentation of Acute Intracranial Hemorrhage and its Subtypes in Brain CT Images Suganthi, R.; Yalagi, Pratibha C. Kaladeep; Chowdhury, Rini; Kumar, Prashant; Sharmila, D.; Krishna, Kunchanapalli Rama
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 4 (2025): October
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i4.1048

Abstract

Accurate segmentation of acute intracranial hemorrhage (ICH) in brain computed tomography (CT) scans is crucial for timely diagnosis and effective treatment planning. While the RSNA Intracranial Hemorrhage Detection dataset provides a substantial amount of labeled CT data, most prior research has focused on slice-level classification rather than precise pixel-level segmentation. To address this limitation, a novel segmentation pipeline is proposed that combines a 2.5D U-Net architecture with a dynamic adaptive thresholding technique for enhanced delineation of hemorrhagic lesions and their subtypes. The 2.5D U-Net model leverages spatial continuity across adjacent slices to generate initial lesion probability maps, which are subsequently refined using an adaptive thresholding method that adjusts based on local pixel intensity histograms and edge gradients. Unlike fixed global thresholding approaches such as Otsu’s method, the proposed technique dynamically varies thresholds, enabling more accurate differentiation between hemorrhagic tissue and surrounding brain structures, especially in challenging cases with diffuse or overlapping boundaries. The model was evaluated on carefully selected subsets of the RSNA dataset, achieving a mean Dice similarity coefficient of 0.82 across all ICH subtypes. Compared to standard U-Net and DeepLabV3+ architectures, the hybrid approach demonstrated superior accuracy, sharper boundary delineation, and fewer false positives. Visual analysis confirmed more precise lesion delineation and better correspondence with manual annotations, particularly in low-contrast or complex anatomical regions. This integrated approach proves effective for robust segmentation in clinical environments. It holds promise for deployment in computer-aided diagnosis systems, providing radiologists and neurosurgeons with a reliable tool for comprehensive ICH assessment and enhanced decision-making during emergency care.
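The local-versus-global thresholding contrast the abstract draws can be illustrated with a minimal numpy sketch. This is not the authors' histogram-and-gradient scheme; it is a simplified stand-in in which each pixel of a lesion probability map is compared against its own neighbourhood mean plus an offset, so the threshold varies across the image instead of being a single global value. The window size, offset, and toy data are all illustrative assumptions.

```python
import numpy as np

def adaptive_threshold(prob_map, window=15, offset=0.05):
    """Threshold each pixel against its local neighbourhood mean.

    A simplified stand-in for the paper's histogram/gradient-driven
    scheme: the threshold varies spatially rather than being one
    global value such as Otsu's.
    """
    pad = window // 2
    padded = np.pad(prob_map, pad, mode="reflect")
    local_mean = np.zeros_like(prob_map)
    h, w = prob_map.shape
    for i in range(h):
        for j in range(w):
            local_mean[i, j] = padded[i:i + window, j:j + window].mean()
    return prob_map > (local_mean + offset)

# Toy probability map: a bright lesion on a darker, uneven background.
rng = np.random.default_rng(0)
bg = np.linspace(0.1, 0.4, 64)[None, :] * np.ones((64, 64))
prob = bg + rng.normal(0, 0.02, (64, 64))
prob[20:30, 20:30] += 0.4          # simulated hemorrhagic region
mask = adaptive_threshold(prob, window=15, offset=0.1)
```

Because the background intensity drifts across the image, a single global threshold would either clip the dark side or flood the bright side; the local comparison isolates the lesion on both.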
Vision Language Transformer Framework for Efficient Cancer Diagnosis through Multimodal Integration Gutam, Bala Gangadhara; Malchi, Sunil Kumar
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 4 (2025): October
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i4.1075

Abstract

Finding and treating cancer as early as possible helps patients achieve better outcomes. Patients requiring imaging or biopsy tests sometimes find it challenging to access them because these procedures are often limited by high cost and availability in clinical settings. Recent AI methods, particularly those involving deep learning, can address these problems and significantly enhance the process of detecting cancer, offering greater efficiency and scalability. In this context, large language models (LLMs) and vision-language models (VLMs) are considered leading solutions for making sense of multimodal variables within AI-driven healthcare systems. Although LLMs are strong at working with unstructured clinical text, they have rarely been used for patient assessment beyond descriptive or summarization tasks that combine images and descriptions with both structured and unstructured data. VLMs allow doctors and medical researchers to catch cancer symptoms from multiple angles. In this work, we study both LLMs and VLMs in cancer detection, analyzing their architectures, learning mechanisms, and performance on various datasets, and identifying directions for expanding multimodal AI in healthcare. Our results indicate that combining these two data types improves diagnostic accuracy across different types of cancer. Our studies on the MIMIC-III, MIMIC-IV, TCGA, and CAMELYON 16/17 datasets revealed that multimodal transformer models significantly improve the accuracy of diagnosing biopsy results. In particular, BioViL achieves an AUC-ROC of 0.92 for detecting lung cancer, whereas a fine-tuned CLIP achieves a comparable 0.91 for colon cancer detection.
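The AUC-ROC figures the abstract reports can be computed directly from model scores via the metric's rank-sum (Mann-Whitney U) equivalence: the probability that a random positive case outranks a random negative one. The sketch below is a generic illustration of that metric, not the authors' evaluation code, and the score vectors are made up.

```python
import numpy as np

def auc_roc(scores, labels):
    """AUC-ROC via its rank-sum (Mann-Whitney U) equivalence:
    the probability that a random positive outranks a random negative."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    # Count positive-beats-negative pairs; ties count half.
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Perfectly separated scores give AUC 1.0; overlap lowers it.
print(auc_roc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))   # 1.0
print(auc_roc([0.9, 0.4, 0.6, 0.2], [1, 1, 0, 0]))   # 0.75
```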
Breast Cancer Classification Using z-score Thresholding and Machine Learning Yildirim, Mustafa Eren; Salman, Yucel B.
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 4 (2025): October
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i4.1165

Abstract

Image processing and machine learning are being used in biomedical applications as supporting tools for the detection and diagnosis of certain diseases. Breast cancer is one of these diseases that researchers have devoted great effort to for decades. To accomplish this task, image-based and feature-based public datasets are available for use. Due to several factors such as hardware limitations or preprocessing, images can become noisy. The noise in images, which can lead to anomalies or outliers in the dataset, may decrease detection accuracy and mislead medical staff during the diagnostic stage. Therefore, this study aims to present the effect of removing outliers from the dataset on the detection accuracy of breast cancer. The proposed method removes outliers detected through z-score analysis. The remaining data are normalized, and the classification accuracies of ten methods are obtained through direct implementation. The methods include XGBoost, Neural Network, CNN, RNN, AdaBoost, LSTM, GRU, Random Forest, SVM, and Logistic Regression. The public Wisconsin Diagnostic Breast Cancer (WDBC) dataset was used in this study. An ablation study was conducted by fine-tuning the threshold value of the z-score method. The results showed that the best accuracy was obtained when the threshold value was set to 3. Additionally, a comparison was made between the results obtained using the entire dataset and the dataset after outlier removal. The results showed that the average accuracy of all classifiers was 98.08%. In conclusion, the findings indicate that removing outliers from the dataset increases the overall accuracy of breast cancer detection.
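The z-score outlier removal step described above is straightforward to sketch: standardize each feature, then drop any sample whose absolute z-score exceeds the threshold (3, the value the paper's ablation found best). The sketch below uses synthetic data rather than WDBC, and the follow-on normalization step mentioned in the abstract is left out.

```python
import numpy as np

def remove_outliers_zscore(X, threshold=3.0):
    """Drop rows where any feature's |z-score| exceeds the threshold.

    Mirrors the paper's setting: threshold 3 gave the best accuracy
    in their ablation study on WDBC.
    """
    z = np.abs((X - X.mean(axis=0)) / X.std(axis=0))
    keep = (z < threshold).all(axis=1)
    return X[keep], keep

rng = np.random.default_rng(1)
X = rng.normal(0, 1, (200, 5))     # synthetic stand-in for WDBC features
X[0, 2] = 12.0                     # inject one extreme outlier
clean, keep = remove_outliers_zscore(X, threshold=3.0)
```

In the paper's pipeline the retained rows would then be normalized before being fed to the ten classifiers.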
BTISS-WNET: Deep Learning-based Brain Tissue Segmentation using Spatio Temporal WNET Shaik Ali Gousia Banu, Athur; Hazra, Sumit
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 1 (2026): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v8i1.808

Abstract

Brain tissue segmentation (BTISS) from magnetic resonance imaging (MRI) is a critical process in neuroimaging, aiding in the analysis of brain morphology and facilitating accurate diagnosis and treatment of neurological disorders. A major challenge in BTISS is intensity inhomogeneity, which arises from variations in the magnetic field during image acquisition. This results in non-uniform intensities within the same tissue class, particularly affecting white matter (WM) segmentation. To address this problem, we propose an efficient deep learning-based framework, BTISS-WNET, for accurate segmentation of brain tissues. The main contribution of this work is the integration of a spatio-temporal segmentation strategy with advanced pre-processing and feature extraction to overcome intensity inconsistency and improve tissue differentiation. The process begins with skull stripping to eliminate non-brain tissues, followed by Empirical Wavelet Transform (EWT) for noise reduction and edge enhancement. Data augmentation techniques, including random rotation and flipping, are applied to improve model generalization. The preprocessed images are fed into Res-GoogleNet (RGNet) to extract deep semantic features. Finally, a Spatio-Temporal WNet is used for precise WM segmentation, leveraging spatial and temporal dependencies for improved boundary delineation. The proposed BTISS-WNET model achieves a segmentation accuracy of 99.32% for white matter. It also improves accuracy by 1.76%, 18.23%, and 16.02% over DDSeg, BISON, and HMRF-WOA, respectively. In conclusion, BTISS-WNET provides a robust and high-accuracy framework for WM segmentation in MRI images, with promising applications in clinical neuroimaging. Future work will focus on validating the model using real clinical datasets and extending it to multi-tissue and multi-modal MRI segmentation.
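The random rotation and flipping augmentation mentioned in the pipeline is easy to sketch with numpy. This is a generic illustration (rotations restricted to multiples of 90 degrees, a common choice), not the authors' exact augmentation code; the toy 4x4 "image" stands in for a preprocessed MRI slice.

```python
import numpy as np

def augment(image, rng):
    """Random rotation (multiples of 90 degrees) plus optional
    horizontal flip; a minimal sketch of the paper's augmentation
    step, not their implementation."""
    k = int(rng.integers(0, 4))          # 0, 90, 180, or 270 degrees
    out = np.rot90(image, k)
    if rng.random() < 0.5:
        out = np.flip(out, axis=1)       # horizontal flip
    return out

rng = np.random.default_rng(0)
img = np.arange(16).reshape(4, 4)        # toy stand-in for an MRI slice
aug = augment(img, rng)
```

Because rotation and flipping only permute pixels, the augmented image keeps exactly the same intensity values, which is why such transforms are safe for intensity-sensitive tissue segmentation.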
A Neuro-Physiological Diffusion Model for Accurate EEG-Based Psychiatric Disorder Classification Gopal, Pradeep; M, Abbinayaa; Subashini, S; Nagaraj, Mathivanan; Banu, N Nasiya Niwaz; Thumbur, Gowri
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 1 (2026): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v8i1.1131

Abstract

Identification of psychiatric conditions such as depression, schizophrenia, anxiety, and obsessive-compulsive disorder (OCD) from Electroencephalography (EEG) data remains a significant challenge due to the complexity of neurophysiological patterns. While Generative Adversarial Networks (GANs) have been explored to augment EEG datasets and enhance classifier performance, they often suffer from limitations including training instability, mode collapse, and the generation of physiologically implausible EEG samples. These shortcomings hinder their applicability in high-stakes clinical decision-making, where reliability and physiological coherence are critical. This study aims to address the above-mentioned challenges by proposing a novel Neuro-Physiologically Constrained Diffusion Framework (NPC-DiffEEG). This framework leverages the strengths of conditional diffusion models while integrating domain-specific neurophysiological constraints, ensuring that generated EEG signals preserve key properties, such as frequency band structures and inter-channel connectivity patterns, both of which are essential for accurate mental disorder classification. The NPC-DiffEEG-generated data is combined with real EEG features and processed using a multi-task attention-based transformer, enabling the model to learn robust, cross-disorder representations. Extensive experiments conducted on a publicly available multi-disorder EEG dataset demonstrate that NPC-DiffEEG significantly outperforms traditional GAN-based augmentation approaches. The model achieves an impressive average classification accuracy of 96.8%, along with superior F1-scores and AUC values across all disorder categories. Furthermore, integrating attention-based disorder attribution not only enhances interpretability but also reduces overfitting, thereby improving generalizability to unseen subjects. 
This innovative approach marks a substantial advancement in EEG-based classification of psychiatric disorders, bridging the gap between synthetic data generation and clinically reliable decision-support systems.
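One of the neurophysiological properties the abstract says the constraints must preserve, frequency band structure, can be checked with a simple relative band-power measure. The sketch below uses a plain FFT periodogram on a synthetic 10 Hz signal; it illustrates the kind of check such constraints target and is not the authors' constraint formulation.

```python
import numpy as np

def relative_band_power(signal, fs, band):
    """Fraction of total spectral power inside a frequency band,
    via a plain FFT periodogram. Illustrates the band-structure
    check implied by the paper's constraints; not their code."""
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    in_band = (freqs >= band[0]) & (freqs < band[1])
    return power[in_band].sum() / power.sum()

fs = 256
t = np.arange(0, 4, 1 / fs)
synthetic = np.sin(2 * np.pi * 10 * t)           # pure 10 Hz oscillation
alpha_ratio = relative_band_power(synthetic, fs, (8, 13))   # alpha band
beta_ratio = relative_band_power(synthetic, fs, (13, 30))   # beta band
```

A generated EEG segment whose band-power profile diverged sharply from that of real recordings would fail this kind of plausibility check.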
Rule-Based Adaptive Chatbot on WhatsApp for Visual, Auditory, and Kinesthetic Learning Style Detection Rahulil, Muhammad; Yamasari, Yuni; Putra, Ricky Eka; Suartana, I Made; Qoiriah, Anita
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 1 (2026): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v8i1.1215

Abstract

Adapting learning methods to individual learning styles remains a major challenge in digital education due to the static nature of traditional questionnaires and the absence of adaptive feedback mechanisms. This study aimed to develop a rule-based adaptive WhatsApp chatbot capable of automatically identifying users’ learning styles (visual, auditory, and kinesthetic) through a weighted questionnaire enhanced with probabilistic refinement. The proposed system introduces an adaptive decision framework that dynamically manages conversation flow using score dominance evaluation, early termination, and selective question expansion. Bayesian posterior probability estimation is employed to strengthen decision confidence in borderline cases, ensuring consistent and interpretable results even when user responses are ambiguous. The chatbot was implemented using WhatsApp-web.js and MongoDB, supported by session validation and activity log monitoring to ensure operational reliability and data integrity. System validation involved white-box testing using Cyclomatic Complexity to verify logical accuracy and 20-fold cross-validation using a Support Vector Machine (SVM) to evaluate classification performance. The adaptive model achieved an accuracy of 80.2% and an AUC of 0.902, supported by balanced precision (0.738), recall (0.662), and F1-score (0.698). These results demonstrate stable discriminative capability and confirm that the adaptive scoring mechanism effectively reduces redundant questioning, lowers cognitive load, and improves interaction efficiency without compromising reliability. In conclusion, the study successfully achieved its objective of developing an adaptive, efficient, and mathematically transparent learning style detection system. The findings confirm that adaptive rule-based logic reinforced by probabilistic reasoning can significantly enhance the efficiency and reliability of digital learning assessments.
Future research will extend this framework by incorporating multimodal behavioral indicators and personalized learning content to further strengthen adaptive learning support.
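The Bayesian posterior refinement described above can be sketched as a sequential update over the three learning-style classes: each answer's likelihood reweights the running probabilities. The style labels and likelihood numbers below are illustrative assumptions, not the paper's questionnaire weights.

```python
import numpy as np

def posterior(prior, likelihoods):
    """One Bayesian update: multiply prior by the likelihood of the
    latest answer under each style, then renormalize."""
    unnorm = np.asarray(prior) * np.asarray(likelihoods)
    return unnorm / unnorm.sum()

styles = ["visual", "auditory", "kinesthetic"]
p = np.array([1 / 3, 1 / 3, 1 / 3])          # uniform prior
# Two hypothetical answers that each favour "visual":
for lik in ([0.6, 0.25, 0.15], [0.5, 0.3, 0.2]):
    p = posterior(p, lik)
decision = styles[int(np.argmax(p))]
```

In a borderline case the posterior stays close to uniform, which is exactly when the system's selective question expansion would ask more questions instead of terminating early.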
Deep Learning Based Ovarian Cancer Classification Using EfficientNetB2 with Attention Mechanism Kolekar, Jayashri; Pawar, Chhaya; Pande, Amol; Raut, Chandrashekhar
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 1 (2026): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v8i1.1216

Abstract

Ovarian cancer is a gynecological malignancy comprising multiple histopathological subtypes. Traditional diagnostic tools such as histopathology and CA-125 tests suffer from limitations, including inter-observer variability, low specificity, and time-consuming procedures, often leading to delayed or incorrect diagnoses. Conventional machine learning models, such as K-Nearest Neighbors (KNN) and Support Vector Machine (SVM), have been applied but often struggle with high-dimensional image data and fail to extract deep morphological features. This study proposes a deep learning (DL)-based framework to classify ovarian cancer subtypes from histopathological images, aiming to enhance diagnostic accuracy and clinical decision-making. Initially, deep learning was applied using pre-trained architectures such as VGG-16, Xception, and EfficientNetB2. The standout innovation of this study, however, is the integration of EfficientNetB2 with the Convolutional Block Attention Module (CBAM), an attention mechanism. The attention mechanism allows the model to focus on the most informative regions of the image, thereby improving diagnostic precision. The proposed system was trained and validated on a diverse, well-structured dataset, achieving high accuracy and strong generalization capability. EfficientNetB2 with CBAM outperformed the other models, achieving 91% accuracy compared to 52% for VGG-16, 72% for Xception, and 82% for the baseline EfficientNetB2 model. This attention-enhanced, scalable AI model demonstrates strong potential for clinical application. It provides faster and more efficient classification of ovarian cancer subtypes compared to conventional approaches. The framework has the potential to improve survival outcomes for patients with ovarian cancer.
The proposed system demonstrates a significant improvement in ovarian cancer subtype classification (High-Grade Serous Carcinoma, Low-Grade Serous Carcinoma, Clear-Cell, Endometrioid, and Mucinous Carcinoma). It provides a practical tool for aiding early diagnosis and treatment planning, with potential for integration into clinical workflows.
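The channel-attention half of CBAM can be sketched in a few lines of numpy: average- and max-pooled channel descriptors pass through a shared two-layer MLP, and their sigmoid sum rescales each feature channel. This follows the published CBAM design in spirit but is not the authors' implementation; the feature map and weight matrices are random stand-ins.

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def channel_attention(feat, W1, W2):
    """CBAM-style channel attention (numpy sketch): pooled channel
    descriptors share one two-layer MLP; the sigmoid of their sum
    gives per-channel weights that rescale the feature map."""
    # feat: (C, H, W)
    avg = feat.mean(axis=(1, 2))                 # average-pooled descriptor
    mx = feat.max(axis=(1, 2))                   # max-pooled descriptor
    mlp = lambda v: W2 @ np.maximum(W1 @ v, 0)   # shared MLP with ReLU
    weights = sigmoid(mlp(avg) + mlp(mx))        # one weight per channel
    return feat * weights[:, None, None], weights

rng = np.random.default_rng(0)
C, r = 8, 2                                      # channels, reduction ratio
feat = rng.normal(size=(C, 16, 16))
W1 = rng.normal(size=(C // r, C)) * 0.1
W2 = rng.normal(size=(C, C // r)) * 0.1
out, w = channel_attention(feat, W1, W2)
```

In the full CBAM this is followed by a spatial attention stage; here only the channel stage is shown.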
Graph-Theoretic Analysis of Electroencephalography Functional Connectivity Using Phase Lag Index for Detection of Ictal States Rathod, Ghansyamkumar; Modi, Hardik
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 1 (2026): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v8i1.1230

Abstract

Epileptic disorders are characterized by the misfiring of neurons and affect 50 million people worldwide, who must live with physical challenges in their daily lives. The ionic activity of the brain can be detected as electrical activity from the scalp using a non-invasive bio-potential measurement technique known as electroencephalography (EEG). Manual interpretation of brainwaves is a time-consuming, expert-intensive task. In recent years, AI has achieved remarkable results, but at the cost of large datasets and high processing power. We used publicly available online datasets from the Children’s Hospital Boston (CHB) in collaboration with the Massachusetts Institute of Technology (MIT). The datasets consisted of 23 bipolar channels that included pre-processed epochs of both normal and pre-labeled seizure (ictal) states. Using the Phase Lag Index (PLI), the functional connectivity network was built to capture consistent phase synchronization while minimizing artifacts from volume conduction. Graph-theory-based features were used to detect the brain's seizure state. A significant increase in the values of graph theoretical features, such as degree centrality and clustering coefficient, was observed, along with the formation of hyper-connected hubs and disrupted brain communication in the ictal state. Statistical tests (t-tests, ANOVA, Mann-Whitney U) across multiple PLI thresholds confirmed consistent significant differences (p-value < 0.05) between normal and ictal conditions. This study provides a graph-theory-based method that is computationally efficient, interpretable, and suitable for real-time seizure detection. Given their discriminative power, the clustering coefficient and degree centrality are useful biomarkers for biomedical applications.
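The Phase Lag Index at the core of this pipeline is the absolute mean sign of the instantaneous phase difference between two channels: it approaches 1 for a consistent non-zero phase lag and 0 for zero-lag (volume-conduction-like) coupling. The sketch below computes PLI on synthetic sinusoids using a frequency-domain Hilbert construction; it is a generic illustration, not the authors' code, and the signal parameters are made up.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the frequency-domain Hilbert construction."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:(n + 1) // 2] = 2
    if n % 2 == 0:
        h[n // 2] = 1
    return np.fft.ifft(X * h)

def phase_lag_index(x, y):
    """PLI = |mean(sign(phase difference))|: near 1 for a consistent
    non-zero lag, near 0 for zero-lag coupling (the volume-conduction
    artifact the measure is designed to discount)."""
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return abs(np.mean(np.sign(np.sin(dphi))))

fs = 256
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 8 * t)
y_lagged = np.sin(2 * np.pi * 8 * t - np.pi / 4)   # consistent 45-degree lag
y_zero = np.sin(2 * np.pi * 8 * t)                  # zero-lag copy
pli_lag = phase_lag_index(x, y_lagged)              # ~1: genuine coupling
pli_zero = phase_lag_index(x, y_zero)               # ~0: zero-lag artifact
```

Computing this pairwise over the 23 channels yields the connectivity matrix from which degree centrality and clustering coefficient are then derived.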
DCRNet: Hybrid Deep Learning Architecture for Forecasting of Blood Glucose Lad, Ketan; Joshi, Maulin
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 1 (2026): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v8i1.1245

Abstract

Maintaining blood glucose (BG) levels within the euglycemic range is essential for patients with type 1 diabetes (T1D) to prevent both hypoglycemia and hyperglycemia. Often, BG concentration changes due to unannounced carbohydrate intake during meals or an inappropriate insulin dosage. Timely forecasting of BG can help take appropriate actions in advance to keep BG within the euglycemic range. Recent studies indicate that deep learning techniques have demonstrated improved performance in this field. Even so, deep learning approaches often struggle to precisely predict future BG levels. To address these challenges, this paper introduces a novel hybrid deep learning architecture called DCRNet. This architecture incorporates a Dilated Convolution layer that effectively detects multi-scale patterns while minimizing parameter count. Additionally, it utilizes Long Short-Term Memory (LSTM) to handle contextual dependencies and maintain the temporal order of the extracted features. DCRNet predicts future BG levels for short-term durations (15, 30, and 60 minutes) using information on glucose, meals, and insulin dosages. The proposed architecture’s performance is evaluated on 11 simulated subjects from the UVA/Padova T1D Mellitus simulator and 12 actual subjects from the OhioT1DM dataset. In contrast to previous works, the proposed architecture achieves root mean square errors (RMSEs) of 3.42, 6.45, and 17.73 mg/dL for simulated subjects and 12.57, 20.72, and 34.41 mg/dL for actual subjects, for prediction horizons (PH) of 15, 30, and 60 minutes, respectively. The proposed architecture is also evaluated using the mean absolute error (MAE), which is 2.11, 4.47, and 11.78 mg/dL for simulated subjects and 7.9, 14.13, and 25.5 mg/dL for actual subjects, for the 15-, 30-, and 60-minute PH. The experimental findings validate that the proposed architecture, which uses a dilated convolutional LSTM, outperforms other recent state-of-the-art models.
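The reason dilated convolution "detects multi-scale patterns while minimizing parameter count" is that a kernel of K taps with dilation d spans roughly K*d time steps using only K weights. The sketch below implements a generic dilated causal convolution on a toy series; it is an illustration of the mechanism, not the DCRNet layer itself.

```python
import numpy as np

def dilated_causal_conv(x, w, dilation):
    """y[t] = sum_k w[k] * x[t - k*dilation], with the past
    zero-padded (causal: no future samples are used).

    A K-tap kernel covers K*dilation steps of history with
    only K parameters, which is the multi-scale economy the
    architecture exploits."""
    y = np.zeros(len(x))
    for t in range(len(x)):
        for k in range(len(w)):
            idx = t - k * dilation
            if idx >= 0:
                y[t] += w[k] * x[idx]
    return y

x = np.arange(10, dtype=float)     # toy stand-in for a glucose series
y = dilated_causal_conv(x, w=[0.5, 0.5], dilation=4)
```

With dilation 4, each output averages the current sample with the one four steps earlier, so two weights already capture a pattern spanning five time steps.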
Comparative Analysis of YOLO11 and Mask R-CNN for Automated Glaucoma Detection Fayyadh, Muhammad Naufaldi; Saragih, Triando Hamonangan; Farmadi, Andi; Mazdadi, Muhammad Itqan; Herteno, Rudy; Abdullayev, Vugar
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 1 (2026): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v8i1.1266

Abstract

Glaucoma is a progressive optic neuropathy and a major cause of irreversible blindness. Early detection is crucial, yet current practice depends on manual estimation of the vertical Cup-to-Disc Ratio (vCDR), which is subjective and inefficient. Automated fundus image analysis provides scalable solutions but is challenged by low optic cup contrast, dataset variability, and the need for clinically interpretable outcomes. This study aimed to develop and evaluate an automated glaucoma screening pipeline based on optic disc (OD) and optic cup (OC) segmentation, comparing a single-stage model (YOLO11-Segmentation) with a two-stage model (Mask R-CNN with ResNet50-FPN), and validating it using vCDR at a threshold of 0.7. The contributions are fourfold: establishing a benchmark comparison of YOLO11 and Mask R-CNN across three datasets (REFUGE, ORIGA, G1020); linking segmentation accuracy to vCDR-based screening; analyzing precision-recall trade-offs between the models; and providing a reproducible baseline for future studies. The pipeline employed standardized preprocessing (optic nerve head cropping, resizing to 1024×1024, conservative augmentation). YOLO11 was trained for 200 epochs, and Mask R-CNN for 75 epochs. Evaluation metrics included Dice, Intersection over Union (IoU), mean absolute error (MAE), correlation, and classification performance. Results showed that Mask R-CNN achieved higher disc Dice (0.947 in G1020, 0.938 in REFUGE) and recall (0.880 in REFUGE), while YOLO11 attained stronger vCDR correlation (r = 0.900 in ORIGA) and perfect precision (1.000 in G1020). Overall accuracy exceeded 0.92 in REFUGE and G1020. In conclusion, YOLO11 favored conservative screening with fewer false positives, while Mask R-CNN improved sensitivity. These complementary strengths highlight the importance of model selection by screening context and suggest future research on hybrid frameworks and multimodal integration.
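The vCDR validation step above reduces to a small computation once the cup and disc masks are available: the ratio of their vertical extents, compared against the 0.7 threshold. The sketch below uses synthetic rectangular masks as a stand-in for real segmentation outputs; the extent-based definition is a common simplification, not necessarily the paper's exact measurement code.

```python
import numpy as np

def vertical_cdr(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio from binary masks: the vertical
    extent (rows spanned) of the cup over that of the disc."""
    def vertical_extent(mask):
        rows = np.where(mask.any(axis=1))[0]
        return rows.max() - rows.min() + 1
    return vertical_extent(cup_mask) / vertical_extent(disc_mask)

# Toy masks: disc spans 40 rows, cup 30 rows -> vCDR 0.75, which
# exceeds the paper's 0.7 screening threshold.
disc = np.zeros((100, 100), dtype=bool)
cup = np.zeros((100, 100), dtype=bool)
disc[30:70, 30:70] = True
cup[35:65, 40:60] = True
vcdr = vertical_cdr(cup, disc)
suspect = vcdr > 0.7                 # glaucoma-suspect flag
```

This is the point where segmentation error turns into screening error: a few rows of over- or under-segmentation on either mask shifts the ratio across the 0.7 boundary, which is why the paper reports both Dice and vCDR correlation.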