cover
Contact Name
Triwiyanto
Contact Email
triwiyanto123@gmail.com
Phone
+628155126883
Journal Mail Official
editorial.jeeemi@gmail.com
Editorial Address
Department of Electromedical Engineering, Poltekkes Kemenkes Surabaya Jl. Pucang Jajar Timur No. 10, Surabaya, Indonesia
Location
Kota Surabaya,
Jawa Timur
INDONESIA
Journal of Electronics, Electromedical Engineering, and Medical Informatics
ISSN: -     EISSN: 2656-8632     DOI: https://doi.org/10.35882/jeeemi
The Journal of Electronics, Electromedical Engineering, and Medical Informatics (JEEEMI) is a peer-reviewed open-access journal. The journal invites scientists and engineers throughout the world to exchange and disseminate theoretical and practice-oriented work covering three (3) major areas of research: 1) Electronics, 2) Biomedical Engineering, and 3) Medical Informatics (with an emphasis on hardware and software design). Submitted papers must be written in English for an initial review stage by editors and a further review process by a minimum of two reviewers.
Articles: 312 documents
Non-Contact Heart Rate Detection Using FMCW Radar Based on 1-D Convolutional Neural Networks
Diyah Widiyasari; Istiqomah Istiqomah; Fiky Yosef Suratman; Suto Setiyadi
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 2 (2026): April
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v8i2.1547

Abstract

Non-contact heart rate (HR) estimation using frequency-modulated continuous-wave (FMCW) radar has emerged as a promising solution for unobtrusive, continuous vital-sign monitoring. However, accurately extracting HR from radar signals remains challenging because of low-amplitude cardiac-induced chest vibrations, environmental clutter, motion artifacts, and system noise. Traditional signal processing techniques, such as bandpass filtering combined with fast Fourier transform (FFT) analysis, are commonly employed to estimate HR in the frequency domain. Nevertheless, these approaches are highly sensitive to noise and often struggle to robustly capture weak cardiac components, leading to unstable or inaccurate estimates. To address these limitations, this study proposes a non-contact HR estimation framework based on FMCW radar combined with a one-dimensional convolutional neural network (1D-CNN). A systematic radar signal preprocessing pipeline is developed, including range-bin selection, phase extraction, noise suppression, filtering, and structured data labeling, to construct learning-ready input features. The 1D-CNN model is designed to automatically learn discriminative temporal patterns associated with cardiac activity directly from preprocessed radar signals. The proposed method is evaluated using two datasets: a publicly available dataset and an independently acquired dataset collected under controlled conditions. Performance is benchmarked against conventional bandpass filtering- and FFT-based HR estimation methods. The experimental results demonstrate that the proposed 1D-CNN framework achieves more accurate and stable HR predictions. On the public dataset, MAE decreases from 17.96 to 6.09 BPM, RMSE from 21.28 to 7.34 BPM, and MedAE from 17.66 to 5.43 BPM. The independent dataset yields consistent gains, with MAE decreases from 14.05 to 5.45 BPM, RMSE from 18.05 to 6.84 BPM, and MedAE from 10.74 to 4.57 BPM. 
These results indicate that the proposed 1D-CNN framework can effectively estimate HR from radar signals and demonstrate its capability to operate across datasets acquired with different radar frequencies.
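The preprocessing steps the abstract names (range-bin selection, phase extraction, drift removal) can be sketched roughly as follows. The array layout, the simple energy-based bin criterion, and all names here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def extract_phase_signal(radar_cube):
    """Select the range bin with the highest average energy, unwrap its
    phase over slow time, and remove linear drift so only the chest
    vibration component remains."""
    energy = np.mean(np.abs(radar_cube) ** 2, axis=0)       # per-bin energy
    target_bin = int(np.argmax(energy))                     # subject's bin
    phase = np.unwrap(np.angle(radar_cube[:, target_bin]))
    n = np.arange(len(phase))
    phase = phase - np.polyval(np.polyfit(n, phase, 1), n)  # detrend
    return target_bin, phase

# toy radar cube: a 1.2 Hz (72 BPM) vibration placed in range bin 5
fs = 20.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
cube = 0.01 * (rng.standard_normal((len(t), 16))
               + 1j * rng.standard_normal((len(t), 16)))
cube[:, 5] += np.exp(1j * 0.5 * np.sin(2 * np.pi * 1.2 * t))
bin_idx, phase = extract_phase_signal(cube)
```

In the paper's pipeline, a filtered version of this phase signal would then be windowed, labeled, and fed to the 1D-CNN.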
Deep Electro-Impedance Analytics for Bone Mineral Profiling: A Rough-Fuzzy Neural Attention Model
Aripin, Aripin; Wulandari, Mauldy Nawa Ayu; Agata, Eunike Laurensya; Kusuma, Zulhendra Adi; Susilo, Susilo; Wulandari, Sari Ayu
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 2 (2026): April

DOI: 10.35882/jeeemi.v8i2.1434

Abstract

Electrochemical Impedance Spectroscopy (EIS) has emerged as a promising modality for non-invasive biomedical diagnostics, particularly for radiation-free monitoring tasks such as Bone Mineral Density (BMD) assessment. However, the high dimensionality, noise, and non-linear behavior of impedance signals pose significant challenges for accurate and interpretable prediction. This study introduces Hybrid Rough Set-Attention Network (HRSA-Net), a hybrid regression framework that combines Rough Set-based feature selection with a self-attention neural architecture to enable continuous BMD estimation directly from raw EIS data. The proposed framework employs Artificial Neural Network (ANN) and Transformer-based regression models to learn complex impedance-density relationships. Unlike prior studies that are limited to classification tasks or rely on indirect physiological indicators, HRSA-Net is explicitly designed for direct regression of real-valued BMD scores. The model performance is evaluated against reference measurements obtained from Dual-energy X-ray Absorptiometry (DXA), the current clinical gold standard for bone density assessment. Through a comprehensive series of ablation experiments, HRSA-Net achieves an R² of 0.834 using an attention-guided ANN backbone, demonstrating the critical contribution of both Rough Set reduction and attention mechanisms. Performance further improves to an R² of 0.855 when incorporating a Transformer regressor and Huber loss, indicating superior robustness and generalizability under varying signal conditions. Comparative analysis with state-of-the-art EIS-based learning approaches shows that the proposed pipeline consistently outperforms conventional neural models and statistical methods. 
Overall, HRSA-Net provides an interpretable, accurate, and scalable foundation for future portable EIS-based BMD diagnostic systems, offering a safer alternative to radiological methods such as DXA and enabling feasible deployment in primary or community healthcare settings.
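The self-attention mechanism at the core of HRSA-Net's regressor is, in its general form, scaled dot-product attention. A minimal sketch follows; the random weights, the toy "impedance tokens", and all names are assumptions for illustration, not the paper's architecture:

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over a token sequence.
    x: (tokens, d_model); wq/wk/wv: (d_model, d_model) projections."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[1])
    # row-wise softmax turns scores into attention weights
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
d = 8
x = rng.standard_normal((5, d))   # e.g. 5 impedance-spectrum tokens
out = self_attention(x,
                     rng.standard_normal((d, d)),
                     rng.standard_normal((d, d)),
                     rng.standard_normal((d, d)))
```

In a real regressor the attention output would be pooled and passed to a dense head producing the continuous BMD estimate.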
A Hybrid Quantum-Inspired Deep Learning Framework with Bio-Inspired Optimization for Cardiovascular Disease Prediction
Rathinam, Vinoth; K, Valarmathi; A, Madhumathi; S D, Lalitha
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 2 (2026): April

DOI: 10.35882/jeeemi.v8i2.1440

Abstract

The prognostication of cardiovascular diseases is paramount for the facilitation of early detection and enhancement of patient prognoses. This study introduces a novel hybrid deep learning architecture that amalgamates Convolutional Neural Networks (CNN), Quantum Convolutional Neural Networks (Q-CNN), Long Short-Term Memory (LSTM) networks, Quantum-Inspired Long Short-Term Memory (Q-LSTM) models, Denoising Autoencoders (DAE), and Transformer Encoder–Decoder frameworks. The quantum models were innovatively structured by integrating unitary transformations and Hilbert space representations within traditional deep learning paradigms. Hyperparameter optimization, including learning rate, hidden unit count, dropout rates, and batch size, was executed utilizing the Greylag Goose Optimization (GGO) algorithm, which was chosen after initial benchmarking against conventional optimization techniques. These models underwent training and validation on a curated clinical dataset encompassing both demographic and clinical attributes, with preprocessing measures implemented to rectify missing data and address class imbalances. Among the assessed models, the GGO-optimized Q-LSTM exhibited superior performance, attaining an accuracy of 98.05% (95% CI: 96.8–99.2%), a precision of 1.00, a recall of 98.96%, an F1-score of 97.95%, and an AUC-ROC of 0.980. The DAE demonstrated an accuracy of 97.08% alongside an AUC-ROC of 0.989. Future research will focus on external validation and statistical significance testing to evaluate model generalization. Additionally, model interpretability through SHAP analysis and practical deployment aspects (e.g., integration with Electronic Health Records) are also discussed.
This investigation demonstrates that the integration of deep learning methodologies, quantum-inspired modeling, and bio-inspired optimization strategies can markedly enhance predictive analytics for cardiovascular disease identification, while underscoring the critical importance of model interpretability and rigorous validation.
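GGO itself is not reproduced here; the sketch below shows only the generic population-based hyperparameter-search pattern the abstract relies on: candidates drift toward the current best with random jitter, under an elitist best-so-far rule. The objective, bounds, and update rule are illustrative assumptions:

```python
import random

def metaheuristic_search(objective, bounds, pop_size=10, iters=30, seed=0):
    """Generic population-based hyperparameter search: every candidate
    drifts part-way toward the best solution found so far, with Gaussian
    jitter for exploration, and the best-so-far is kept elitistically.
    (Illustrative stand-in for the paper's GGO optimizer.)"""
    rng = random.Random(seed)
    names = list(bounds)
    pop = [{n: rng.uniform(*bounds[n]) for n in names} for _ in range(pop_size)]
    best = dict(min(pop, key=objective))
    for _ in range(iters):
        for ind in pop:
            for n in names:
                lo, hi = bounds[n]
                step = rng.uniform(0.0, 0.5) * (best[n] - ind[n])
                jitter = rng.gauss(0.0, 0.05 * (hi - lo))
                ind[n] = min(max(ind[n] + step + jitter, lo), hi)
        cand = min(pop, key=objective)
        if objective(cand) < objective(best):
            best = dict(cand)
    return best

# toy objective: pretend validation loss is minimized at lr=0.01, dropout=0.3
obj = lambda h: (h["lr"] - 0.01) ** 2 + (h["dropout"] - 0.3) ** 2
found = metaheuristic_search(obj, {"lr": (1e-4, 0.1), "dropout": (0.0, 0.6)})
```

In practice the objective would be a full train-and-validate run of the Q-LSTM for each candidate hyperparameter set.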
Robust Brain Tumor MRI Classification Through MobileNetV3 Deep Feature Fusion and Principal Component Analysis Enhanced AdaBoost Learning
Abdullah, Ahmed Aizaldeen; Hussein, Hadeel Safaa; Rahaim, Laith Ali Abdul
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 2 (2026): April

DOI: 10.35882/jeeemi.v8i2.1462

Abstract

Among the most serious neurological diseases are brain tumors, which pose a challenge to early detection through MRI due to low contrast, tissue heterogeneity, and high-dimensional deep features that make it difficult for traditional classification models to be effective. This study proposes a robust and computationally efficient multi-class classification framework capable of distinguishing four tumor types: glioma, meningioma, pituitary tumor, and no tumor. The primary contributions are: (1) the development of a hybrid feature-learning pipeline in which a one-level 2D Discrete Wavelet Transform (2D-DWT) is employed as a multi-resolution preprocessing step to enhance MRI slices prior to deep feature extraction using MobileNetV3; (2) the application of Principal Component Analysis (PCA) to compress a 1,024-dimensional deep-feature vector into only 20 principal components, a roughly 98% reduction in dimensionality; (3) the use of an optimized AdaBoost ensemble specifically adapted for low-dimensional inputs; and (4) achieving performance that surpasses several published approaches evaluated on the same benchmark dataset. The proposed workflow includes cropping, normalization, and CLAHE enhancement, followed by 2D-DWT to extract LL, LH, HL, and HH sub-band information. The wavelet-refined MRI slices are processed by MobileNetV3 to implicitly encode spectral–textural information into deep semantic representations, which are subsequently reduced using PCA and classified by AdaBoost. Experiments conducted on a public Kaggle brain MRI dataset comprising 7023 images show that MobileNetV3 combined with 2D-DWT achieves an accuracy of 99.56%. When enhanced with PCA and AdaBoost, the full framework attains 99.94% accuracy, 99.95% precision, 99.96% recall, 99.94% F1-score, and 100% AUC, demonstrating remarkable tumor discrimination performance.
In summary, the proposed PCA–AdaBoost hybrid framework offers a highly accurate, lightweight, and clinically promising solution for automated brain tumor MRI classification.
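The PCA compression step (1,024-dimensional deep features down to 20 principal components) can be reproduced in a few lines via SVD. The feature matrix below is random stand-in data, not MobileNetV3 output:

```python
import numpy as np

def pca_reduce(features, n_components=20):
    """Center the feature matrix, take its SVD, and project onto the
    top principal directions (rows of Vt)."""
    mu = features.mean(axis=0)
    centered = features - mu
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]          # top principal directions
    return centered @ components.T, components, mu

rng = np.random.default_rng(0)
deep_features = rng.standard_normal((100, 1024))  # stand-in deep features
reduced, comps, mu = pca_reduce(deep_features)    # (100, 20) output
```

At test time, new feature vectors would be centered with the stored `mu` and projected with the stored `comps` before being handed to the AdaBoost classifier.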
Robustness Under Attack: Assessing Adversarial Fragility in Deep Learning Models for COVID-19 Radiography Prediction
Kamil, Muhammad Hisyam; Farma, Elga Putri Tri; Basuki, Setio
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 2 (2026): April

DOI: 10.35882/jeeemi.v8i2.1506

Abstract

Deep learning, especially Convolutional Neural Network (CNN) architectures, has significantly improved medical image analysis for predicting lung diseases through chest X-ray (CXR) images, including pneumonia and COVID-19. However, despite achieving high diagnostic precision, CNN models remain highly susceptible to adversarial attacks, defined as small, visually imperceptible alterations optimized to exploit non-linear decision boundaries that cause high-confidence mispredictions. This vulnerability presents a critical concern in clinical settings, where deterministic diagnostic errors directly compromise patient safety. This paper systematically implements white-box adversarial attacks to quantify the resilience of CNN models in multi-class CXR image classification. This paper utilizes the COVID-19 Radiography Dataset, comprising four diagnostic categories: COVID-19, Lung Opacity, Normal, and Viral Pneumonia. A DenseNet-121 architecture was employed for feature extraction, and the trained model was subsequently subjected to Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) attacks under varying L∞​-bounded epsilon settings. The empirical experiments reveal three critical findings: 1) The implementation of sub-pixel adversarial attacks causes severe performance degradation, where the PGD attack constrained at an epsilon of 0.1/255 reduced the global model accuracy from a baseline of 95.42% to 25.32%; 2) Iterative attacks (PGD) represent the absolute worst-case scenario for model reliability by efficiently discovering high-dimensional manifold gaps, whereas the model demonstrates relative resilience to linear, single-step FGSM perturbations; and 3) Gradient-weighted Class Activation Mapping (Grad-CAM) analysis verifies that this performance collapse is associated with a deterministic semantic shift, displacing the model's spatial attention from clinically relevant pulmonary regions toward spurious background noise. 
In conclusion, this paper empirically proves that despite exhibiting high accuracy on clean data, unprotected CNNs remain fundamentally unsafe for autonomous clinical deployment due to their acute vulnerability to gradient-based perturbations, necessitating the future integration of robust adversarial training frameworks.
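FGSM, the single-step attack evaluated above, perturbs the input by epsilon in the direction of the loss gradient's sign. A toy sketch on a hand-built logistic model (the weights and inputs are made up for illustration; a real attack would backpropagate through DenseNet-121):

```python
import numpy as np

def fgsm(x, grad, epsilon):
    """Fast Gradient Sign Method: one signed-gradient step of size
    epsilon, clipped back to the valid [0, 1] input range."""
    return np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)

# toy logistic "model": p(y=1|x) = sigmoid(w.x + b); the gradient of
# cross-entropy loss w.r.t. x for true label y is (p - y) * w
w = np.array([2.0, -3.0, 1.5])
b = -0.2
x = np.array([0.6, 0.1, 0.7])
p = 1.0 / (1.0 + np.exp(-(w @ x + b)))     # clean confidence, label y = 1
grad_x = (p - 1.0) * w                     # loss gradient at y = 1
x_adv = fgsm(x, grad_x, epsilon=0.1)
p_adv = 1.0 / (1.0 + np.exp(-(w @ x_adv + b)))  # lower than p
```

PGD, the stronger attack in the paper, iterates this step several times and projects back into the epsilon ball after each iteration.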
CNN-Based Facial Image Analysis for Pediatric Down Syndrome Classification
Yunidar, Yunidar; Harahap, Inda Mariana; Melinda, Melinda; Rosmawinda, Rosmawinda; Basir, Nurlida; Rafiki, Aufa; Rahman, Imam Fathur
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 2 (2026): April

DOI: 10.35882/jeeemi.v8i2.1523

Abstract

Down syndrome (trisomy 21) is a genetic disorder caused by an extra copy of chromosome 21, resulting in distinctive facial characteristics and developmental and intellectual delays. Early detection is crucial to enable timely medical intervention. However, conventional diagnostic procedures still rely on clinical observation and genetic testing, which can be invasive and expensive. This study proposes a facial image–based classification system for detecting Down syndrome using a Convolutional Neural Network (CNN) approach. Seven CNN architectures were evaluated, namely EfficientNetB0, MobileNetV2, ResNet34, ShuffleNetV2, AlexNet, VGG19, and InceptionV3, under two training scenarios: with and without early stopping. The dataset consisted of 1,000 facial images of children with and without Down syndrome, split into training, validation, and test sets at a 60:20:20 ratio. Face detection was performed using the Haar Cascade Classifier, followed by data augmentation techniques including rotation, zoom, translation, horizontal flipping, and Gaussian noise to improve model generalization and reduce overfitting. Experimental results show that the VGG19 architecture achieved the best performance, with an accuracy of 94.5%, precision of 91.59%, recall of 98%, and an F1-score of 94.69%. A one-way ANOVA test yielded an F-value of 0.003 and a p-value of 0.955 (> 0.05), indicating no statistically significant difference between models trained with and without early stopping. Grad-CAM visualization highlighted key facial regions, namely the eyes, nose, and mouth, as the primary contributors to classification, while analysis using 68 facial landmark points revealed distinctive morphological patterns associated with Down syndrome. The integration of CNN models, Grad-CAM visualization, and facial landmark analysis demonstrates a promising, interpretable, and non-invasive approach to supporting early Down syndrome screening using facial images.
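The one-way ANOVA used above to compare the with/without-early-stopping scenarios reduces to a ratio of between-group to within-group mean squares. A from-scratch sketch, with hypothetical accuracy values standing in for the study's actual measurements:

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic computed from scratch: the ratio of the
    between-group mean square to the within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# hypothetical test accuracies for the two training scenarios
with_early_stopping = [94.1, 93.8, 94.5]
without_early_stopping = [94.0, 94.2, 93.9]
f_stat = one_way_anova_f([with_early_stopping, without_early_stopping])
```

A very small F (as the study reports, 0.003) means the between-scenario variation is negligible relative to the within-scenario variation, hence the non-significant p-value.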
Comparative Evaluation of LSTM and Metaheuristic-Optimized Neural Networks for ECG Prediction under Limited Data Conditions
Prenata, Giovanni Dimas; Ridho’i, Ahmad; Arshad, Mohd Rizal
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 2 (2026): April

DOI: 10.35882/jeeemi.v8i2.1524

Abstract

This study presents a comparative evaluation of Deep Feedforward Neural Network (DFFNN) models optimized using single-stage metaheuristic approaches, including Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Grey Wolf Optimization (GWO), as well as a multi-stage hybrid optimization strategy (GA+GWO) for ECG-based emotion classification. The experimental dataset consists of ECG recordings collected from three elderly participants using a Sparkfun AD8232 sensor under controlled emotional stimuli, representing a limited-subject and small-data scenario. Feature extraction is conducted using Heart Rate Variability (HRV) parameters derived from both time domain (Mean RR, SDNN, RMSSD, Mean HR, and STD HR) and frequency domain (LF, HF, and LF/HF ratio). Experimental results from six repeated trials demonstrate that the multi-stage DFFNN+GA+GWO model achieves the best optimization performance, yielding the lowest Mean Squared Error (MSE) of 0.01599 and a consistent training accuracy of up to 85.71%. Compared with single-stage optimization methods, the hybrid approach exhibits improved convergence behavior and reduced performance variance, indicating enhanced optimization stability. However, test accuracy remains relatively limited (33.33%–50.00%), reflecting constrained generalization capability due to the small dataset and the absence of subject-wise or external validation. Further statistical analysis using confidence intervals and nonparametric testing confirms that the observed performance improvements are primarily associated with optimization stability rather than statistically significant gains in predictive generalization. Therefore, this study emphasizes the role of metaheuristic optimization in stabilizing neural network training under limited data conditions. 
The findings should be interpreted as a pilot feasibility study, and future work is required to validate the proposed approach using larger, more diverse datasets and more rigorous validation strategies.
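The time-domain HRV features listed above (Mean RR, SDNN, RMSSD, Mean HR) follow directly from the RR-interval series. A small sketch with made-up RR intervals in milliseconds:

```python
import math

def hrv_time_domain(rr_ms):
    """Time-domain HRV features from RR intervals in milliseconds:
    Mean RR, SDNN (sample std of RR), RMSSD (root mean square of
    successive differences), and Mean HR in beats per minute."""
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    sdnn = math.sqrt(sum((r - mean_rr) ** 2 for r in rr_ms) / (n - 1))
    diffs = [rr_ms[i + 1] - rr_ms[i] for i in range(n - 1)]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    mean_hr = 60000.0 / mean_rr          # BPM from mean RR interval
    return {"mean_rr": mean_rr, "sdnn": sdnn,
            "rmssd": rmssd, "mean_hr": mean_hr}

# illustrative RR series; real values come from AD8232 R-peak detection
features = hrv_time_domain([812, 790, 805, 821, 798, 810])
```

The frequency-domain features (LF, HF, LF/HF) would additionally require resampling the RR series and estimating its power spectrum.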
Processing and Analysis of Electrogastrogram (EGG) Signals to Evaluate Stressor and Motion Sickness Conditions in Virtual Reality Environments
Hanafi, MHD.Hanafi; Sahroni, Alvin; Kusumadewi, Sri; Setiawan, Hendra; Firdaus, Firdaus
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 2 (2026): April

DOI: 10.35882/jeeemi.v8i2.1531

Abstract

Virtual Reality (VR) technology has rapidly evolved and is widely utilized in healthcare, education, and entertainment. However, its use often induces motion sickness and stress, which may reduce user comfort and performance. This study aims to determine whether VR exposure triggers such conditions, evaluate them using electrogastrogram (EGG) signals, and identify the most effective EGG features as physiological indicators. EGG signals from nine healthy male subjects were recorded using two channels under three experimental conditions (pre, stimulation, and post) during both pre-prandial and post-prandial phases. Frequency-domain analysis was performed using the Fast Fourier Transform (FFT) within the 0.03-0.07 Hz range to extract dominant frequency, dominant magnitude, mean frequency, average magnitude, and band power. Subjective evaluation was conducted using a five-point Likert scale. The results indicate that VR exposure induced motion sickness and stress, with Likert scores ranging from 3 to 5. Three normalized magnitude features of the EGG signal on channel 0 during the pre-prandial stimulation phase exhibited significant positive correlations with motion sickness: dominant magnitude (r = 0.841, p = 0.005), average magnitude (r = 0.742, p = 0.022), and band power (r = 0.788, p = 0.012). These features also showed significant correlations with stressor levels: dominant magnitude (r = 0.895, p = 0.001), average magnitude (r = 0.780, p = 0.013), and band power (r = 0.821, p = 0.007). These findings confirm that VR exposure can induce motion sickness and act as a physiological stressor, with three EGG magnitude features serving as reliable physiological indicators. The lower corpus extending to the antrum and pylorus was identified as the most representative electrode placement area, and the pre-prandial phase was found to be more susceptible to VR-induced disturbances than the post-prandial phase.
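The FFT features extracted in the 0.03-0.07 Hz band can be sketched as follows. The sampling rate and the synthetic 0.05 Hz (3 cycles-per-minute, the normal gastric slow-wave rate) test signal are illustrative assumptions:

```python
import numpy as np

def egg_band_features(signal, fs, band=(0.03, 0.07)):
    """FFT-based EGG features inside the band of interest: dominant
    frequency/magnitude, magnitude-weighted mean frequency, average
    magnitude, and band power."""
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    f, m = freqs[mask], spec[mask]
    return {
        "dominant_freq": float(f[np.argmax(m)]),
        "dominant_mag": float(m.max()),
        "mean_freq": float(np.sum(f * m) / np.sum(m)),
        "avg_mag": float(m.mean()),
        "band_power": float(np.sum(m ** 2)),
    }

fs = 2.0                               # Hz, illustrative sampling rate
t = np.arange(0, 600, 1 / fs)          # 10-minute record
egg = np.sin(2 * np.pi * 0.05 * t)     # synthetic gastric slow wave
features = egg_band_features(egg, fs)
```

For real recordings, the signal would be detrended and bandpass-filtered before this step, and each feature normalized per subject as described above.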
Brain Tumor Detection from MRI Images Using an Ensemble-Based Machine Learning Framework
Bhatt, Arpit; Patel, Chirag; Bhatt, Nikita
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 2 (2026): April

DOI: 10.35882/jeeemi.v8i2.1559

Abstract

The early detection of brain tumors from MRI images is critical for effective treatment planning, yet manual analysis of these images is time-consuming and prone to inter-observer variability. This paper proposes a machine learning framework for automated brain tumor detection that uses an ensemble of classifiers to improve accuracy and reliability. The framework combines Support Vector Machine (SVM), Random Forest (RF), and k-Nearest Neighbor (k-NN) classifiers, using a majority voting method at the decision level to make final predictions. The model uses both handcrafted texture features from the Gray-Level Co-occurrence Matrix (GLCM) and deep features from a pre-trained ResNet50 model to enhance its discriminative power. The framework was tested on three publicly available MRI datasets (Figshare, SARTAJ, and BR35H) comprising a total of 9,826 images. The ensemble model achieved 95.2% accuracy, with precision, recall, and F1-score of 94.6%, 94.1%, and 94.3%, respectively, outperforming each of the individual classifiers; an area under the curve (AUC) of 0.97 further indicates strong discriminative ability. The experimental results demonstrate that the ensemble approach not only delivers a robust solution but also ensures computational efficiency, rendering it appropriate for clinical applications. The framework shows potential for use in computer-aided diagnosis systems for real-time brain tumor detection with consistent performance across datasets. Overall, the proposed ensemble-based framework offers a scalable, efficient, and reliable approach to MRI-based brain tumor detection that overcomes the limitations of single classifiers in medical imaging.
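Decision-level majority voting, as used above to fuse the SVM, RF, and k-NN outputs, is a one-liner; the labels below are illustrative:

```python
from collections import Counter

def majority_vote(predictions):
    """Return the label predicted by the most classifiers."""
    return Counter(predictions).most_common(1)[0][0]

# per-image predictions from the three classifiers (labels illustrative)
svm_pred, rf_pred, knn_pred = "tumor", "tumor", "no_tumor"
final = majority_vote([svm_pred, rf_pred, knn_pred])   # two of three agree
```

With an odd number of binary classifiers, as here, a tie cannot occur; multi-class ensembles would additionally need a tie-breaking rule (e.g., falling back to the most confident classifier).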
LRSE-LCC: A Lightweight Residual CNN with Squeeze-and-Excitation Attention for Lung Cancer Classification from CT Image
Rana, Dhaval J.; Rana, Keyur
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 2 (2026): April

DOI: 10.35882/jeeemi.v8i2.1574

Abstract

Lung cancer remains a major cause of cancer deaths globally, creating a need for accurate early diagnostic systems. Although deep learning models have shown encouraging results in classifying lung cancer from CT scans, most are computationally complex. This paper proposes a lightweight and accurate deep learning model for multi-class lung cancer classification from CT scans. A new model called Lightweight Residual CNN with Squeeze-and-Excitation Lung Cancer Classification (LRSE-LCC) is proposed. The model combines lightweight residual learning for stable gradient flow with channel attention for improved feature representation. Dual global pooling, combining Global Average Pooling and Global Max Pooling, enables complementary feature extraction. In addition, a balanced batch training method is used to handle class imbalance. The proposed model was tested on the IQ-OTH/NCCD lung CT image dataset, which includes normal, benign, and malignant images. Image resizing and normalization were performed before training. The proposed LRSE-LCC model achieved a test accuracy of 98.19%. Sensitivity was 100.00%, indicating a strong ability to detect malignant images. The model achieved a specificity of 99.04%, reducing false-positive predictions. The macro-averaged AUC was 99.90%, and the AUC values for all classes exceeded 99.80%, indicating outstanding classification performance. The macro F1-score was 96.42%, and Cohen’s kappa coefficient was 96.88%, confirming that the agreement was not due to chance. The overall error rate was limited to 1.81%. In conclusion, the proposed LRSE-LCC model offers both high classification accuracy and efficiency. The combination of residual learning, channel attention, and dual pooling greatly improves the accuracy of multi-class diagnosis, and the lightweight model has strong potential for application in real-world computer-aided lung cancer diagnosis systems.
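The Squeeze-and-Excitation channel attention the model builds on can be sketched in NumPy: a global-average "squeeze" produces one descriptor per channel, a two-layer "excitation" maps it to sigmoid gates, and each channel is rescaled. The weights here are random and the reduction ratio r is the usual SE convention, an assumption since the abstract does not give the exact configuration:

```python
import numpy as np

def squeeze_excite(feature_maps, w1, w2):
    """Squeeze-and-Excitation channel attention on a (C, H, W) tensor:
    global-average squeeze -> ReLU bottleneck -> sigmoid gates -> rescale."""
    c = feature_maps.shape[0]
    squeeze = feature_maps.reshape(c, -1).mean(axis=1)   # (C,) descriptor
    hidden = np.maximum(squeeze @ w1, 0.0)               # ReLU bottleneck
    scale = 1.0 / (1.0 + np.exp(-(hidden @ w2)))         # sigmoid gates
    return feature_maps * scale[:, None, None]           # per-channel rescale

rng = np.random.default_rng(0)
c, r = 16, 4                       # channels and reduction ratio
fmap = rng.standard_normal((c, 8, 8))                    # C x H x W
out = squeeze_excite(fmap,
                     rng.standard_normal((c, c // r)),
                     rng.standard_normal((c // r, c)))
```

Because the gates lie in (0, 1), the block can only attenuate channels, letting the network emphasize informative feature maps while adding very few parameters, which fits the lightweight design goal described above.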