Contact Name
Triwiyanto
Contact Email
triwiyanto123@gmail.com
Phone
+628155126883
Journal Mail Official
editorial.jeeemi@gmail.com
Editorial Address
Department of Electromedical Engineering, Poltekkes Kemenkes Surabaya Jl. Pucang Jajar Timur No. 10, Surabaya, Indonesia
Location
Kota Surabaya,
Jawa Timur
INDONESIA
Journal of Electronics, Electromedical Engineering, and Medical Informatics
ISSN : -     EISSN : 2656-8632     DOI : https://doi.org/10.35882/jeeemi
The Journal of Electronics, Electromedical Engineering, and Medical Informatics (JEEEMI) is a peer-reviewed open-access journal. The journal invites scientists and engineers throughout the world to exchange and disseminate theoretical and practice-oriented work covering three (3) major areas of research: 1) Electronics, 2) Biomedical Engineering, and 3) Medical Informatics (with emphasis on hardware and software design). Submitted papers must be written in English; they undergo an initial review by the editors and a further review by a minimum of two reviewers.
Articles 287 Documents
Precise Lung Cancer Prediction using ResNet – 50 Deep Neural Network Architecture Lakide, Vedavrath; Ganesan, V.
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 1 (2025): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i1.518

Abstract

Lung cancer remains the leading cause of cancer-related death worldwide, which underscores the need for improved diagnostic methods. Using computed tomography (CT) images and deep learning techniques, this study aims to improve the classification of lung cancer. We compare the performance of our modified ResNet50 architecture against two well-known convolutional neural network (CNN) architectures, EfficientNetB1 and Inception V3, to determine how well it performs in the classification of lung nodules. Our research objectives are to analyze the effects of various preprocessing and hyperparameter optimization methods on model performance and to determine how well these models improve diagnostic accuracy. The dataset used is an extensive collection of CT images with annotated lung nodule classifications. A rigorous preprocessing pipeline is used to improve image quality and ensure accurate model training. Using the Keras Sequential framework, the models are trained with optimized dropout rates and L2 regularization to prevent overfitting. Model performance is evaluated with metrics such as accuracy, loss, and confusion matrices. A comprehensive evaluation of each model's sensitivity and specificity across various thresholds is also provided by means of the Free-Response Receiver Operating Characteristic (FROC) curve and Area Under the Curve (AUC) values. The modified ResNet50 model showed superior classification performance, achieving an accuracy of 98.1% and an AUC of 0.97, thereby outperforming the other models in the study. For comparison, EfficientNetB1 achieved an accuracy of 96.4% and an AUC of 0.94, while Inception V3 achieved an accuracy of 95.8% and an AUC of 0.93. These findings suggest that the accuracy of lung cancer detection from CT images can be significantly improved by combining specialized preprocessing and training methods with advanced CNN architectures. With potential implications for clinical practice and future research directions, this study offers a promising strategy for increasing lung cancer diagnostic accuracy.
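
As a rough illustration of the kind of training setup this abstract describes, the following is a minimal sketch of a ResNet50-based Keras classifier with dropout and L2 regularization; the input size, layer widths, rates, and the binary nodule/no-nodule output are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal sketch (assumptions: 224x224 RGB-converted CT slices, binary output).
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet",
    input_shape=(224, 224, 3), pooling="avg")

model = models.Sequential([
    base,
    layers.Dropout(0.5),                                      # tunable dropout rate
    layers.Dense(256, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),   # L2 penalty against overfitting
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),                    # nodule vs. no-nodule
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
```
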
Ensemble learning based Convolutional Neural Network – Depth Fire for detecting COVID-19 in Chest X-Ray images Chandrika, G Naga; Chowdhury, Rini; Prashant Kumar; K, Sangamithrai; E, Glory; M D, Saranya
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 1 (2025): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i1.525

Abstract

The deadly disease COVID-19, caused by the novel coronavirus, has posed a significant challenge to healthcare systems around the world. To stop the virus's transmission and lessen its negative effects on public health, it is crucial to identify infected individuals correctly and rapidly. Artificial intelligence (AI) has the capacity to increase the precision and effectiveness of COVID-19 diagnosis. The purpose of this study is to build a reliable AI-based model capable of correctly detecting COVID-19 cases from chest X-ray images. A dataset of 16,000 chest X-ray images, including COVID-19 positive and negative instances, is employed in the investigation. Four pre-trained convolutional neural network (CNN) models are employed in the proposed approach, and the output of each model is combined using an ensembling technique. The major objective of this project is to develop an accurate and reliable AI-based model to classify COVID-19 cases from chest X-ray images. The novelty of this method lies in its use of data augmentation strategies to enhance model generalisation and prevent overfitting. The accuracy and dependability of the model are further improved by utilising multiple pre-trained CNN models and ensembling methods. The proposed AI-based model's classification accuracy for the five classes (bacterial, COVID-19 positive, negative, opacity, and viral), the three classes (COVID-19 positive, negative, and healthy), and the two classes (COVID-19 positive and negative) was 97.3%, 98.2%, and 97.6%, respectively. The proposed model performs better in terms of sensitivity, accuracy, and specificity than conventional techniques already in use. The proposed AI-based model may therefore be able to recognise COVID-19 cases quickly and effectively from chest X-rays. This approach can help radiologists assess patients quickly and correctly, improving patient outcomes and lessening the strain on healthcare systems. To ensure the precision of the diagnosis, it is vital to note that the model's decisions should be made in consultation with a licensed medical expert.
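
The ensembling step mentioned in the abstract can be illustrated with a simple soft-voting scheme over several pre-trained backbones; the backbone choices, input size, and two-class output below are assumptions for illustration rather than the authors' actual configuration.

```python
# Minimal sketch: soft-voting ensemble of pre-trained CNN branches (assumed setup).
import numpy as np
import tensorflow as tf

def build_branch(backbone_fn, name):
    # One classification branch on top of an ImageNet-pretrained backbone.
    base = backbone_fn(include_top=False, weights="imagenet",
                       input_shape=(224, 224, 3), pooling="avg")
    inputs = tf.keras.Input((224, 224, 3))
    outputs = tf.keras.layers.Dense(2, activation="softmax")(base(inputs))
    return tf.keras.Model(inputs, outputs, name=name)

# The backbone choices here are illustrative; the paper's four models may differ.
branches = [
    build_branch(tf.keras.applications.DenseNet121, "densenet"),
    build_branch(tf.keras.applications.ResNet50, "resnet"),
    build_branch(tf.keras.applications.InceptionV3, "inception"),
    build_branch(tf.keras.applications.MobileNetV2, "mobilenet"),
]

def ensemble_predict(batch):
    # Average the softmax outputs of the separately trained branches (soft voting).
    probs = np.mean([m.predict(batch, verbose=0) for m in branches], axis=0)
    return probs.argmax(axis=1)
```
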
A Novel Image Feature Extraction Based Machine Learning approach for Disease Detection from Chest X-Ray Images Vangipuram, Sravan Kiran; Appusamy, Rajesh
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 1 (2025): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i1.529

Abstract

The limitation of feature selection is the biggest challenge for machine learning classifiers in disease classification. This research proposes a novel feature extraction method to extract representative features from medical images, combining the extracted features with the original image pixel features. Additionally, we propose a new method that uses values of the Andrews curve function to transform chest X-ray images into spectrograms. The spectrogram images are believed to aid in distinguishing near-similar medical images, such as COVID-19 and pneumonia. The study aims to build an efficient machine learning system that applies the proposed feature extraction method and utilizes spectrogram images for distinguishing near-similar medical images. For the experimental analysis, we used the award-winning Kaggle Chest Radiography image dataset. The test results show that, among all machine learning classifiers, the logistic regression classifier could correctly distinguish COVID-19 and pneumonia images with a 97.18% test accuracy, a 98.34% detection rate, a 97.8% precision rate, and an AUC value of 0.99 on the test dataset. The machine learning model learned to distinguish between medical images that appear similar using features found through the proposed feature extraction and spectrogram images. The results also show that the proposed approach using XGBoost outperformed state-of-the-art models from recent research studies when (i) binary classification is performed using COVID-19 and Normal chest X-ray images and (ii) multiclass classification is performed using Normal, COVID-19, and Pneumonia chest X-ray images.
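
To make the Andrews-curve idea concrete, the sketch below turns an image's mean row profile into an Andrews-curve signal, computes a spectrogram from it, and concatenates those values with the raw pixels before fitting a logistic regression; the profile choice, spectrogram parameters, and classifier settings are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch: Andrews-curve signal -> spectrogram -> combined features (assumed pipeline).
import numpy as np
from scipy.signal import spectrogram
from sklearn.linear_model import LogisticRegression

def andrews_curve(x, n_points=512):
    """Evaluate the Andrews curve f_x(t) over t in [-pi, pi] for a feature vector x."""
    t = np.linspace(-np.pi, np.pi, n_points)
    f = np.full(n_points, x[0] / np.sqrt(2.0))
    for i, xi in enumerate(x[1:], start=1):
        k = (i + 1) // 2
        f += xi * (np.sin(k * t) if i % 2 == 1 else np.cos(k * t))
    return f

def image_to_features(img):
    # img: 2-D grayscale array; the mean row profile is one simple choice of input vector.
    profile = img.mean(axis=0)
    signal = andrews_curve(profile)
    _, _, sxx = spectrogram(signal, nperseg=64)
    return np.concatenate([sxx.ravel(), img.ravel()])   # spectrogram + raw pixel features

# Hypothetical usage, assuming `images` (list of 2-D arrays) and `labels` exist:
# X = np.stack([image_to_features(img) for img in images])
# clf = LogisticRegression(max_iter=1000).fit(X, labels)
```
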
Predicting Evolutionary Importance of Amino Acids through Mutation of Codons Using K-means Clustering Hussain, Nasrin Irshad; Boruah, Kuntala; Akhtar, Adil
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 1 (2025): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i1.538

Abstract

Mutation is a random biological event that may cause permanent (long-term) change in a living organism, induced by structural or compositional alterations in its proteins. During mutation, genetic material such as the nucleotide bases of a codon is changed, potentially altering the codon and consequently the amino acid that the new codon encodes. In this study, mutations at different nucleotide base positions within the codons are analyzed to understand the evolutionary importance of amino acids. By creating hypothetical mutations at the first, second, and third positions of all 61 codons (excluding stop codons) and using K-means clustering, we categorized the resulting amino acids. Our analysis reveals that mutations at the second base position generate the highest number of distinct amino acids, indicating greater evolutionary significance compared to first- and third-position mutations. We applied the proposed framework to the SARS-CoV-2 (Homo sapiens) amino acid sequence and were able to deduce several significant findings about the mutation patterns. The clustering analysis revealed that amino acids such as Glycine (G), Alanine (A), Proline (P), Valine (V), and one polar amino acid are recurrent in the combined centroids of the clusters. These amino acids, predominantly hydrophobic, play a crucial role in stabilizing protein structures. This framework not only provides insight into mutation patterns and their biological significance but also underscores the importance of specific amino acids in the evolutionary process.
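
A minimal sketch of the mutation-enumeration and clustering step is given below, assuming Biopython is available for codon translation; the amino-acid count encoding and the number of clusters are illustrative choices, not the authors' exact framework.

```python
# Minimal sketch: enumerate single-base substitutions per codon position and cluster
# the resulting amino-acid profiles with K-means (assumed encoding and cluster count).
from itertools import product
import numpy as np
from Bio.Seq import Seq                      # Biopython, assumed available
from sklearn.cluster import KMeans

BASES = "ACGU"
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

# All 61 sense codons (stop codons translate to '*' and are excluded).
codons = ["".join(c) for c in product(BASES, repeat=3)
          if str(Seq("".join(c)).translate()) != "*"]

def mutation_profile(codon, pos):
    """Count which amino acids arise from all substitutions at one codon position."""
    counts = dict.fromkeys(AMINO_ACIDS, 0)
    for b in BASES:
        if b == codon[pos]:
            continue
        mutated = codon[:pos] + b + codon[pos + 1:]
        aa = str(Seq(mutated).translate())
        if aa != "*":                        # ignore mutations into stop codons
            counts[aa] += 1
    return [counts[a] for a in AMINO_ACIDS]

for pos in range(3):                         # first, second, and third base positions
    X = np.array([mutation_profile(c, pos) for c in codons])
    km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
    distinct = {AMINO_ACIDS[i] for row in X for i, v in enumerate(row) if v}
    print(f"position {pos + 1}: {len(distinct)} distinct amino acids, "
          f"cluster sizes {np.bincount(km.labels_)}")
```
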
A Novel Encoder Decoder Architecture with Vision Transformer for Medical Image Segmentation Saroj Bala; Arora, Kumud; R, Jeevitha; Chowdhury, Rini; Kumar, Prashant; Nageswari, C.Shobana
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 1 (2025): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i1.571

Abstract

Brain tumor image segmentation is one of the most critical tasks in medical imaging for diagnosis, treatment planning, and prognosis. Traditional methods for brain tumor image segmentation are mostly based on Convolutional Neural Networks (CNNs), which have proved very powerful but still have limitations in effectively capturing long-range dependencies and complex spatial hierarchies in MRI images. Variability in the shape, size, and location of tumors may affect performance and lead to suboptimal outcomes. To address these limitations, a new encoder-decoder architecture with a Vision Transformer (ViT), called the VisionTranscoder, is proposed to enhance brain tumor detection and classification. The proposed VisionTranscoder exploits a transformer's ability to model global context through self-attention mechanisms, providing a more inclusive interpretation of the intricate patterns in medical images and supporting classification by capturing both local and global features. The VisionTranscoder uses a Vision Transformer in its encoder, processing images as sequences of patches to capture global dependencies that often lie outside the view of traditional CNNs. The segmentation map is then reconstructed at high fidelity by the decoder through upsampling and skip connections that preserve detailed spatial information. The risk of overfitting is greatly reduced by design, advanced regularization techniques, and extensive data augmentation. The dataset contains 7,023 human brain MRI images in four classes: glioma, meningioma, no tumor, and pituitary. Images from the 'no tumor' class, indicating an MRI scan without any detectable tumor, were taken from the Br35H dataset. The results show the efficiency of the VisionTranscoder over a wide set of brain MRI scans, producing an accuracy of 98.5% with a loss of 0.05. This performance underlines its ability to accurately segment and classify brain tumors without overfitting.
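
The encoder-decoder idea can be sketched as follows: a patch-embedding stem plus a few transformer blocks as the encoder, followed by an upsampling decoder that produces a segmentation mask. The image size, patch size, depth, and widths below are assumptions for illustration, and the sketch omits the skip connections and classification head described in the abstract.

```python
# Minimal sketch of a ViT-style encoder with an upsampling decoder (assumed sizes).
import tensorflow as tf
from tensorflow.keras import layers

IMG, PATCH, DIM, HEADS, BLOCKS = 256, 16, 128, 4, 4
N_PATCH = (IMG // PATCH) ** 2                     # 16 x 16 grid of patches

class AddPositionEmbedding(layers.Layer):
    """Adds a learned positional embedding to the patch tokens."""
    def build(self, input_shape):
        self.pos = self.add_weight(name="pos",
                                   shape=(1, input_shape[1], input_shape[2]),
                                   initializer="random_normal")
    def call(self, x):
        return x + self.pos

inputs = tf.keras.Input((IMG, IMG, 1))
# Patch embedding: a strided convolution splits the image into non-overlapping patches.
x = layers.Conv2D(DIM, PATCH, strides=PATCH)(inputs)
x = layers.Reshape((N_PATCH, DIM))(x)
x = AddPositionEmbedding()(x)

for _ in range(BLOCKS):                           # pre-norm transformer encoder blocks
    h = layers.LayerNormalization()(x)
    h = layers.MultiHeadAttention(HEADS, DIM // HEADS)(h, h)
    x = layers.Add()([x, h])
    h = layers.LayerNormalization()(x)
    h = layers.Dense(DIM * 2, activation="gelu")(h)
    h = layers.Dense(DIM)(h)
    x = layers.Add()([x, h])

# Decoder: reshape tokens back to a spatial grid and upsample to full resolution.
x = layers.Reshape((IMG // PATCH, IMG // PATCH, DIM))(x)
for filters in (64, 32, 16, 8):                   # 16 -> 32 -> 64 -> 128 -> 256
    x = layers.Conv2DTranspose(filters, 3, strides=2,
                               padding="same", activation="relu")(x)
mask = layers.Conv2D(1, 1, activation="sigmoid")(x)   # binary tumor mask

model = tf.keras.Model(inputs, mask)
model.compile(optimizer="adam", loss="binary_crossentropy")
```
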
Intelligent Tuberculosis Detection System with Continuous Learning on X-ray Images A'yuni, Qurrata; Nasaruddin, Nasaruddin; Irhamsyah, Muhammad; Azhary, Mulkan; Roslidar, Roslidar
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 1 (2025): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i1.572

Abstract

Tuberculosis (TB) has become a global health threat, with millions of cases each year. Therefore, rapid and accurate detection is needed to control its spread. The application of artificial intelligence, especially Deep Learning (DL), has shown great potential in improving the accuracy of TB detection through DL-based X-ray image analysis. Although many studies have developed X-ray image classification models, very few have integrated them into web or mobile platforms. In addition, the models integrated into these platforms generally do not apply continuous learning methods, so their performance cannot be updated. Thus, it is necessary to build an intelligent web-based system that integrates the ResNet-101 model for TB detection in X-ray images. This system utilizes continuous learning methods, allowing the model to automatically update itself with new data and thereby improve detection performance over time. The results showed that before continuous learning, the model classified all TB images correctly but was only able to classify two normal images correctly, resulting in an accuracy of 62.5%. After manual continuous learning, the model showed an increase in accuracy to 71.4%, with a better ability to recognize normal images, although there was a slight decrease in performance in detecting TB.
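
The continuous-learning idea can be illustrated with a simple periodic fine-tuning step on newly verified uploads; the file paths, image size, learning rate, and folder layout below are assumptions, and the system's actual retraining trigger and labeling workflow may differ.

```python
# Minimal sketch: fine-tune a saved classifier on newly labeled data, then redeploy it.
import tensorflow as tf

def continuous_learning_step(model_path="tb_resnet101.keras",     # hypothetical path
                             new_data_dir="verified_uploads/",    # subfolders: normal/, tb/
                             epochs=3):
    model = tf.keras.models.load_model(model_path)
    new_ds = tf.keras.utils.image_dataset_from_directory(
        new_data_dir, label_mode="binary",
        image_size=(224, 224), batch_size=16)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),        # small LR to limit forgetting
                  loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(new_ds, epochs=epochs)                               # fine-tune on the new data
    model.save(model_path)                                         # overwrite the deployed model
    return model
```
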
Sleep Apnea Detection Model Using Time Window and One-Dimensional Convolutional Neural Network on Single-Lead Electrocardiogram Pratama, Fadil; Wiharto, Wiharto; Salamah, Umi
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 1 (2025): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i1.573

Abstract

Sleep apnea is a serious disorder involving frequent disruptions of breathing during sleep, which can result in numerous health issues, such as cognitive deterioration, cardiovascular illness, and heightened mortality risk. This study introduces a model designed for the detection of sleep apnea from single-lead electrocardiogram (ECG) signals, providing an accurate detection method. Single-lead ECG signals can be used to obtain ECG-Derived Respiration (EDR), which combines important respiratory information with RR intervals to help detect sleep apnea more accurately. We structure the research process into seven systematic stages, ensuring a comprehensive approach to the problem. The process commences with the acquisition of data from the "Apnea-ECG Database" available on the PhysioNet platform, which underpins the ensuing analysis. Subsequent to data collection, we execute a sequence of preprocessing procedures, including segmentation, filtering, and R-peak detection, to prepare the ECG data for analysis. We then perform feature extraction, obtaining 12 distinct features from the RR intervals and 6 features from the R-peak amplitudes, both of which are necessary for the model. The research subsequently applies feature engineering, implementing a Time Window methodology to encapsulate the temporal dynamics of the data. To ensure robust results, we conduct model evaluation using stratified K-fold cross-validation with five folds. The modeling technique employs a 1D Convolutional Neural Network (1D-CNN) with the Adam optimizer. Ultimately, the performance assessment shows an accuracy of 89.87%, sensitivity of 86.16%, specificity of 92.30%, and an AUC score of 0.96, attained with a Time Window size of 15. This model signifies a substantial improvement in performance relative to previous studies and serves as a feasible option for the detection of sleep apnea.
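
A minimal sketch of the modeling and evaluation stages is shown below: a small 1D-CNN over windowed feature vectors, evaluated with stratified 5-fold cross-validation. The layer sizes and training settings are assumptions; feature extraction from the ECG (12 RR-interval and 6 R-peak-amplitude features per minute) is assumed to have been done already.

```python
# Minimal sketch: 1D-CNN over Time-Window feature segments with stratified 5-fold CV.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import StratifiedKFold

WINDOW, N_FEATURES = 15, 18          # 12 RR-interval + 6 R-peak-amplitude features

def build_model():
    inputs = tf.keras.Input((WINDOW, N_FEATURES))
    x = tf.keras.layers.Conv1D(32, 3, activation="relu")(inputs)
    x = tf.keras.layers.Conv1D(64, 3, activation="relu")(x)
    x = tf.keras.layers.GlobalAveragePooling1D()(x)
    x = tf.keras.layers.Dense(64, activation="relu")(x)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)   # apnea vs. normal minute
    return tf.keras.Model(inputs, outputs)

def cross_validate(X, y):
    # X: (n_segments, WINDOW, N_FEATURES); y: (n_segments,) binary apnea labels.
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    scores = []
    for train_idx, test_idx in skf.split(X, y):
        model = build_model()
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
        model.fit(X[train_idx], y[train_idx], epochs=20, batch_size=64, verbose=0)
        scores.append(model.evaluate(X[test_idx], y[test_idx], verbose=0)[1])
    return float(np.mean(scores))
```
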
Advanced Bi-CNN for Detection of Knee Osteoarthritis using Joint Space Narrowing Analysis Kadu, Rahul; Pawar, Sunil
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 1 (2025): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i1.574

Abstract

The prevalence of knee osteoarthritis is increasing significantly due to the expanding global ageing population and the rising incidence of obesity. Many researchers use artificial intelligence analytics for knee osteoarthritis (KOA) prediction and treatment. The majority of research is restricted to particular patient groups or attributes, such as MRI, X-ray, or questionnaire groups. In our research, we propose the use of an advanced ortho bilinear convolutional neural network (CNN) classifier to enhance the precision of knee osteoarthritis detection through joint space narrowing analysis. Recognizing the critical need for accurate and early diagnosis of osteoarthritis, this study introduces a sophisticated approach leveraging the unique capabilities of bilinear CNNs (BiCNN). By integrating bilinear interactions within the CNN architecture, the model aims to capture intricate spatial and channel-wise dependencies in knee radiographic images, thereby improving the ability to detect subtle changes in osteoarthritis progression, particularly within the joint space. The proposed bilinear CNN classifier promises to refine the precision of knee osteoarthritis detection, providing clinicians with a powerful tool for identifying joint space narrowing with improved accuracy. Based on the experiment over unseen images, the recall was 93.04%, precision 96.33%, F1 score 95.46%, and overall accuracy 94.28%. The results show the superiority of the proposed method compared to other state-of-the-art methods. Hence, the proposed method can be used for KOA diagnosis and KL grading in real-time scenarios.
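
The bilinear interaction can be sketched as a bilinear-pooling head on a shared backbone: the outer product of the per-location feature vector with itself, averaged over locations, normalized, and fed to a classifier. The backbone, channel reduction, and five-grade output below are assumptions for illustration, not the authors' exact architecture.

```python
# Minimal sketch of homogeneous bilinear pooling (assumed backbone and output classes).
import tensorflow as tf
from tensorflow.keras import layers

CHANNELS = 64   # channel reduction before the outer product keeps the pooled vector small

class BilinearPooling(layers.Layer):
    def call(self, feat):
        # feat: (batch, H, W, C). Outer product at every location, averaged over locations.
        b = tf.einsum("bhwi,bhwj->bij", feat, feat)
        b /= tf.cast(tf.shape(feat)[1] * tf.shape(feat)[2], b.dtype)
        b = tf.reshape(b, (-1, CHANNELS * CHANNELS))
        b = tf.sign(b) * tf.sqrt(tf.abs(b) + 1e-10)   # signed square-root normalization
        return tf.math.l2_normalize(b, axis=-1)

inputs = tf.keras.Input((224, 224, 3))
backbone = tf.keras.applications.VGG16(include_top=False, weights="imagenet")
feat = backbone(inputs)
feat = layers.Conv2D(CHANNELS, 1, activation="relu")(feat)   # reduce channels before pooling
pooled = BilinearPooling()(feat)
outputs = layers.Dense(5, activation="softmax")(pooled)      # e.g. KL grades 0-4 (assumed)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```
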
Neonatal Jaundice Severity Detection from Skin Images using Deep Transfer Learning Techniques ALdabbagh, Banan; Aziz, Mazin H.
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 1 (2025): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i1.576

Abstract

Neonates in the initial weeks postpartum frequently experience jaundice, a prevalent medical condition characterized by yellow discoloration of the sclera and skin. This phenomenon occurs as a result of elevated bilirubin concentrations in the circulatory system. When bilirubin levels reach critical thresholds, they present a considerable risk of severe complications, including neurological impairment, one of the gravest outcomes that may ensue if the condition is not addressed with due diligence. This study investigates a non-invasive method for assessing jaundice severity in full-term infants aged 1 to 29 days, focusing on infants in Mosul city. A dataset of 344 images was collected using an iPhone 12 Pro Max (9 MP camera) at Ibn Al-Atheer Hospital, capturing various skin tones and lighting conditions to ensure accurate analysis. Advanced computer vision techniques were used to classify jaundice severity into three and four categories based on skin images. Pre-trained deep transfer learning models, namely VGG16 and ResNet50, were used for training, with the fully connected layer removed and a suitable classifier designed for each model. VGG16 achieved 91.71% accuracy for the three-category classification, while ResNet50 reached 95.98%. For the four-category classification, accuracies of 94.92% and 94.66% were achieved, respectively. These high accuracy levels suggest that non-invasive, image-based assessments can reduce the need for repeated blood tests. This research highlights the potential of smartphone-based methods for jaundice screening in neonatal care, providing a reliable, accessible tool to reduce strain on medical facilities and improve early detection.
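
As an illustration of the transfer-learning setup described, a minimal sketch with VGG16 is shown below: the convolutional base is kept, the original fully connected top is dropped, and a small classifier head is added for the three-category case. The input size, head layout, and class labels are assumptions, not the authors' exact design.

```python
# Minimal sketch: VGG16 base without its top, plus a small custom classifier head.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3))
base.trainable = False                      # transfer learning: reuse pretrained features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.4),
    layers.Dense(3, activation="softmax"),  # three severity categories (illustrative labels)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```
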
Cybersentinel: The Cyberbullying Detection Application Based on Machine Learning and VADER Lexicon with GridSearchCV Optimization Ernawati, Siti; Frieyadie, Frieyadie; Yulia, Eka Rini
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 6 No 4 (2024): October
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v6i4.580

Abstract

Cyberbullying is becoming an increasingly troubling issue in today's digital age, with serious impacts on the well-being of individuals and society as a whole. With the number of social media users continuously rising, there is an urgent need to develop effective solutions for detecting cyberbullying, which particularly harms the well-being of children and adolescents. The Big Data era also brings many new challenges, including the ability of organizations to manage, process, and extract value from available data to generate useful information. The aim of this research is to develop Cybersentinel, a cyberbullying detection application that combines Machine Learning and VADER Lexicon approaches to improve classification accuracy. It involves comparing several Machine Learning algorithms optimized using the GridSearchCV technique to find the best combination of parameters. The dataset used consists of social media comments labeled as bullying and non-bullying. The developed model uses the Support Vector Machine algorithm, achieving a best accuracy of 98.83%. The system is developed in Python with the Streamlit framework. The application development follows the Design Science Research (DSR) approach, which integrates principles, practices, and procedures to facilitate problem-solving and support the design and creation of applications. Testing is conducted using black-box testing. The results show that parameter optimization using GridSearchCV can significantly enhance model performance, and applying the DSR method allows Cybersentinel to be tailored to specific needs. Thus, Cybersentinel provides an effective solution for detecting cyberbullying and contributes to improving the safety of social media users.
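
A minimal sketch of how VADER sentiment scores can be combined with text features and an SVM tuned by GridSearchCV is given below; the TF-IDF features, the parameter grid, and the data layout are illustrative assumptions, not Cybersentinel's actual design.

```python
# Minimal sketch: TF-IDF + VADER compound score features, SVM tuned with GridSearchCV.
# Requires the VADER lexicon: nltk.download("vader_lexicon")
from nltk.sentiment.vader import SentimentIntensityAnalyzer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
from scipy.sparse import hstack, csr_matrix

def build_features(comments, vectorizer=None):
    # One VADER compound score per comment, stacked next to TF-IDF features.
    sia = SentimentIntensityAnalyzer()
    scores = csr_matrix([[sia.polarity_scores(c)["compound"]] for c in comments])
    if vectorizer is None:
        vectorizer = TfidfVectorizer(max_features=5000)
        tfidf = vectorizer.fit_transform(comments)
    else:
        tfidf = vectorizer.transform(comments)
    return hstack([tfidf, scores]), vectorizer

# Hypothetical usage, assuming `comments` (list of strings) and `labels` (1 = bullying):
# X, vec = build_features(comments)
# grid = GridSearchCV(SVC(),
#                     {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"], "gamma": ["scale", 0.01]},
#                     cv=5, scoring="accuracy")
# grid.fit(X, labels)
# print(grid.best_params_, grid.best_score_)
```
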