Found 6 Documents
Journal : Journal of Robotics and Control (JRC)

Effectiveness of CNN Architectures and SMOTE to Overcome Imbalanced X-Ray Data in Childhood Pneumonia Detection Pamungkas, Yuri; Ramadani, Muhammad Rifqi Nur; Njoto, Edwin Nugroho
Journal of Robotics and Control (JRC) Vol 5, No 3 (2024)
Publisher : Universitas Muhammadiyah Yogyakarta

DOI: 10.18196/jrc.v5i3.21494

Abstract

Pneumonia is a disease that causes high mortality in children and adults worldwide. It involves inflammation of the lungs, which can be confirmed with a chest X-ray that a doctor then analyzes. However, doctors sometimes have difficulty confirming pneumonia from chest X-ray observations alone. Therefore, we propose combining SMOTE with several CNN architectures in a chest X-ray image-based pneumonia detection system to help diagnose pneumonia quickly and accurately. The chest X-ray data used in this study were obtained from the Kermany dataset (5216 images). Several stages of pre-processing (grayscaling and normalization) and data augmentation (shifting, zooming, and brightness adjustment) are carried out before deep learning, ensuring that the input data is free of noise and fits the requirements. The augmented output data then serve as input for several CNN deep learning architectures; SMOTE is applied to the augmented data to overcome class disparities before it enters the CNN algorithms. Based on the test results, the VGG16 architecture shows the best performance among the architectures tested. In system testing using SMOTE with the CNN architectures VGG16, VGG19, Xception, Inception-ResNet v2, and DenseNet 201, the optimum accuracy levels reached 93.75%, 89.10%, 91.67%, 86.54%, and 91.99%, respectively. SMOTE provides a performance increase of up to 4% for all CNN architectures used in predicting pneumonia.
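The abstract above applies SMOTE to synthesize minority-class samples before CNN training. As a minimal sketch of the interpolation idea behind SMOTE (not the imbalanced-learn implementation such studies typically use, and with `smote_oversample` as a purely illustrative helper), each synthetic point is placed at a random position between a minority sample and one of its nearest minority neighbours:

```python
import random

def smote_oversample(minority, n_new, k=2, seed=0):
    """Minimal SMOTE sketch: create n_new synthetic samples by
    interpolating between a random minority sample and one of its
    k nearest minority neighbours (Euclidean distance)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours of x within the minority class (excluding x)
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic

# toy 2-D minority class; real inputs would be image feature vectors
minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
new_points = smote_oversample(minority, n_new=4)
```

Because every synthetic point lies on a segment between two real minority samples, the oversampled class stays inside the region the minority data already occupies rather than duplicating existing samples outright.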
Work Fatigue Detection of Search and Rescue Officers Based on Hjorth EEG Parameters Pamungkas, Yuri; Indriani, Ratri Dwi; Crisnapati, Padma Nyoman; Thwe, Yamin
Journal of Robotics and Control (JRC) Vol 5, No 6 (2024)
Publisher : Universitas Muhammadiyah Yogyakarta

DOI: 10.18196/jrc.v5i6.23511

Abstract

Work fatigue can cause a decline in cognitive function, such as reduced thinking ability, concentration, and memory. A tired brain cannot work optimally, interfering with a person's ability to perform tasks that require complex thinking. To evaluate work fatigue, self-assessment using the Perceived Stress Scale (PSS) is the method most often used by researchers and practitioners. However, this method is prone to bias because people sometimes hide or exaggerate their tiredness at work. Therefore, in this study we propose evaluating work fatigue from EEG data. A total of 25 participants from SAR officers had their EEG data recorded in relaxed conditions (pre-SAR operations) and fatigued conditions (post-SAR operations). Recording was performed on the brain's left (fp1 & t7) and right (fp2 & t8) hemispheres. The EEG data were then processed by filtering, artifact removal using the ICA method, signal decomposition into several frequency bands, and Hjorth feature extraction (activity, mobility, and complexity). The main advantage of the Hjorth parameters over other EEG features is their ability to provide rich information about the complexity and mobility of the EEG signal in a relatively simple and fast way. Based on the activity feature extraction results, feature values tend to increase in post-SAR-operation conditions compared to pre-operation conditions. In addition, classification of pre- and post-operation SAR conditions using the Bagged Tree algorithm (10-fold cross-validation) shows that the highest accuracy obtained is 94.8%.
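The three Hjorth parameters named in the abstract have simple time-domain definitions: activity is the signal variance, mobility is the square root of the ratio of the first derivative's variance to the signal's variance, and complexity is the ratio of the derivative's mobility to the signal's mobility. A self-contained sketch (using discrete differences as the derivative, one common convention):

```python
import math

def _variance(x):
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

def hjorth(signal):
    """Hjorth parameters of a 1-D signal:
    activity   = var(x)
    mobility   = sqrt(var(x') / var(x))
    complexity = mobility(x') / mobility(x)"""
    d1 = [b - a for a, b in zip(signal, signal[1:])]   # first difference
    d2 = [b - a for a, b in zip(d1, d1[1:])]           # second difference
    activity = _variance(signal)
    mobility = (_variance(d1) / activity) ** 0.5
    complexity = (_variance(d2) / _variance(d1)) ** 0.5 / mobility
    return activity, mobility, complexity

# demo: a pure 10 Hz sine sampled at 128 Hz for 2 seconds
sig = [math.sin(2 * math.pi * 10 * t / 128) for t in range(256)]
activity, mobility, complexity = hjorth(sig)
```

For a pure sinusoid the complexity is close to 1, since differentiating a sinusoid does not change its frequency; more irregular waveforms, such as fatigued EEG, push complexity above 1, which is what makes these features cheap yet informative.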
Enhancing Diabetic Retinopathy Classification in Fundus Images using CNN Architectures and Oversampling Technique Pamungkas, Yuri; Triandini, Evi; Yunanto, Wawan; Thwe, Yamin
Journal of Robotics and Control (JRC) Vol. 6 No. 1 (2025)
Publisher : Universitas Muhammadiyah Yogyakarta

DOI: 10.18196/jrc.v6i1.25331

Abstract

Diabetic Retinopathy (DR) is a severe complication of diabetes mellitus that affects the retinal blood vessels and is a leading cause of blindness in productive-age individuals. The global increase in diabetes prevalence requires an effective DR classification system for early detection. This study aims to develop a DR classification system using several CNN architectures, such as EfficientNet-B4, ResNet-50, DenseNet-201, Xception, and Inception-ResNet-v2, with the SMOTE oversampling technique applied to address class imbalance. The dataset used is APTOS 2019, which has an unbalanced class distribution. Two scenarios were tested: the first without data balancing and the second with SMOTE. The test results show that in the first scenario, Xception achieved the highest accuracy at 80.61%, but model performance was still limited by majority-class dominance. Applying SMOTE in the second scenario significantly improved model accuracy, with EfficientNet-B4 achieving the highest accuracy of 97.78%. Additionally, precision and recall increased dramatically in the second scenario, demonstrating SMOTE's effectiveness in enhancing the model's ability to detect minority classes and reduce prediction errors. DenseNet-201 achieved the highest precision at 99.28%, while Inception-ResNet-v2 recorded the highest recall at 98.57%. Overall, this study shows that SMOTE effectively addresses class imbalance in the fundus dataset and significantly improves CNN model performance. However, this comes at a higher computational cost: applying SMOTE significantly increased the iteration time per round on all tested CNN architectures.
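The abstract notes that accuracy alone was misleading in the unbalanced scenario because of majority-class dominance, which is why precision and recall are reported separately. A toy illustration (the labels and the always-majority classifier are hypothetical, chosen only to make the failure mode visible):

```python
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def precision_recall(y_true, y_pred, positive):
    """Precision and recall for one class, treated as the positive label."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# 9:1 imbalance; a degenerate model that always predicts the majority class
y_true = ["no_dr"] * 9 + ["dr"]
y_pred = ["no_dr"] * 10
acc = accuracy(y_true, y_pred)                                  # looks good
prec_dr, rec_dr = precision_recall(y_true, y_pred, positive="dr")  # reveals failure
```

The degenerate model scores 90% accuracy while never detecting a single DR case (recall 0 for the minority class), which is exactly the gap SMOTE-based balancing is meant to close.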
A Comprehensive Review of EEGLAB for EEG Signal Processing: Prospects and Limitations Pamungkas, Yuri; Rangkuti, Rahmah Yasinta; Triandini, Evi; Nakkliang, Kanittha; Yunanto, Wawan; Uda, Muhammad Nur Afnan; Hashim, Uda
Journal of Robotics and Control (JRC) Vol. 6 No. 4 (2025)
Publisher : Universitas Muhammadiyah Yogyakarta

DOI: 10.18196/jrc.v6i4.27084

Abstract

EEGLAB is MATLAB-based software that is widely used for EEG signal processing due to its complete feature set, analysis flexibility, and active open-source community. This review evaluates the use of EEGLAB based on 55 research articles published between 2020 and 2024, and analyzes its prospects and limitations in EEG processing. The articles were obtained from reputable databases, namely ScienceDirect, IEEE Xplore, SpringerLink, PubMed, Taylor & Francis, and Emerald Insight, and went through a strict selection stage based on eligibility criteria, topic relevance, and methodological quality. The review shows that EEGLAB is widely used for EEG data preprocessing, such as filtering, ICA, and artifact removal, and for advanced analysis such as ERP, ERSP, brain connectivity, and activity source estimation. EEGLAB has bright prospects in the development of neuroinformatics technology, machine learning integration, multimodal analysis, and the large-scale EEG analysis that is increasingly needed. However, EEGLAB still has significant limitations, including a high reliance on manual inspection during preprocessing, low spatial resolution in source modeling, limited multimodal integration, low computational efficiency for large-scale EEG data, and a steep learning curve for new users. To overcome these limitations, future research is recommended to focus on developing more accurate automation methods, increasing the spatial resolution of source analysis, more efficient multimodal integration, high-performance computational support, and implementing open science with a standardized EEG data format. This review contributes by systematically mapping EEGLAB's usage trends and pinpointing critical technical and methodological gaps that must be addressed for broader neurotechnology adoption.
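EEGLAB itself is MATLAB software, but the frequency-band decomposition step named in the review can be sketched language-agnostically. The Python below (illustrative only; the band limits follow common conventions that vary across studies, and a naive DFT stands in for EEGLAB's spectral routines) sums power-spectrum bins into the standard EEG bands:

```python
import math

# conventional EEG frequency bands in Hz (exact limits vary across studies)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal, fs):
    """Sum a naive-DFT power spectrum into EEG frequency bands."""
    n = len(signal)
    powers = {name: 0.0 for name in BANDS}
    for k in range(1, n // 2):          # skip DC, stop below Nyquist
        freq = k * fs / n               # frequency of bin k in Hz
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        p = (re * re + im * im) / n
        for name, (lo, hi) in BANDS.items():
            if lo <= freq < hi:
                powers[name] += p
    return powers

# demo: a pure 10 Hz (alpha-band) sine sampled at 128 Hz
sig = [math.sin(2 * math.pi * 10 * t / 128) for t in range(256)]
powers = band_powers(sig, fs=128)
```

In practice an FFT replaces the O(n²) loop, but the band-summing logic is the same: nearly all of the demo signal's power lands in the alpha band, which is the kind of per-band summary that downstream fatigue or connectivity analyses consume.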
The Emerging Role of Artificial Intelligence in Identifying Epileptogenic Zone: A Systematic Literature Review Pamungkas, Yuri; Radiansyah, Riva Satya; Pratasik, Stralen; Krisnanda, Made; Derek, Natan
Journal of Robotics and Control (JRC) Vol. 6 No. 5 (2025)
Publisher : Universitas Muhammadiyah Yogyakarta

DOI: 10.18196/jrc.v6i5.27281

Abstract

Identifying epileptogenic zones (EZs) is a crucial step in the pre-surgical evaluation of drug-resistant epilepsy patients. Conventional methods, including EEG/SEEG visual inspection and neurofunctional imaging, often face challenges in accuracy, reproducibility, and subjectivity. The rapid development of artificial intelligence (AI) technologies in signal processing and neuroscience has enabled their growing use in detecting epileptogenic zones. This systematic review aims to explore recent developments in AI applications for localizing epileptogenic zones, focusing on algorithm types, dataset characteristics, and performance outcomes. A comprehensive literature search was conducted in 2025 across databases such as ScienceDirect, Springer Nature, and IEEE Xplore using relevant keyword combinations. The study selection followed PRISMA guidelines, resulting in 34 scientific articles published between 2020 and 2024. Extracted data included AI methods, algorithm types, dataset modalities, and performance metrics (accuracy, AUC, sensitivity, and F1-score). Results showed that deep learning was the most used approach (44%), followed by machine learning (35%), multi-methods (18%), and knowledge-based systems (3%). CNN and ANN were the most commonly applied algorithms, particularly in scalp EEG and SEEG-based studies. Datasets ranged from public sources (Bonn, CHB-MIT) to high-resolution clinical SEEG recordings. Multimodal and hybrid models demonstrated superior performance, with several studies achieving accuracy rates above 98%. This review confirms that AI (especially deep learning with SEEG and multimodal integration) has strong potential to improve the precision, efficiency, and scalability of EZ detection. To facilitate clinical adoption, future research should focus on standardizing data pipelines, validating AI models in real-world settings, and developing explainable, ethically responsible AI systems.