Articles

Found 21 Documents

Hyperparameter Tuning of EfficientNet Method for Optimization of Malaria Detection System Based on Red Blood Cell Image Pamungkas, Yuri; Eljatin, Dwinka Syafira
Jurnal Sisfokom (Sistem Informasi dan Komputer) Vol. 13 No. 3 (2024): NOVEMBER
Publisher : ISB Atma Luhur

DOI: 10.32736/sisfokom.v13i3.2257

Abstract

Malaria remains an infectious disease with a high mortality rate. One way to detect malaria is through microscopic examination of blood preparations, which is performed by experts and often takes a long time. With the development of deep learning technology, images of blood cells infected with malaria can be examined more easily. This study therefore proposes a red blood cell image-based malaria detection system using the EfficientNet method with hyperparameter tuning. Three hyperparameters were tuned: the learning rate, the activation function, and the optimiser. The learning rates used were 0.01 and 0.001, the activation functions were ReLU and Tanh, and the optimisers were Adam, SGD, and RMSProp. In the implementation, the cell image dataset from the NIH repository was pre-processed through resizing, rotation, filtering, and data augmentation. The data were then used to train and test several EfficientNet models (B0, B1, B3, B5, and B7), whose performance values were compared. Based on the test results, the EfficientNet-B5 and B7 models showed the best performance among the EfficientNet variants. The most optimal result was obtained with the EfficientNet-B5 model using a learning rate of 0.001, the ReLU activation function, and the Adam optimiser, yielding 97.69% accuracy, 98.36% precision, and 97.03% recall. This research demonstrates that proper model selection and hyperparameter tuning can maximise the performance of a cell image-based malaria detection system, and that the resulting EfficientNet-based diagnostic method is more sensitive and specific in malaria detection using red blood cells.
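The tuning space the abstract describes (two learning rates, two activation functions, three optimisers, across five EfficientNet variants) can be sketched as a simple grid enumeration; the configuration values mirror the abstract, but the enumeration helper itself is a hypothetical illustration, not the study's actual code:

```python
from itertools import product

# Hyperparameter values reported in the abstract
learning_rates = [0.01, 0.001]
activations = ["ReLU", "Tanh"]
optimisers = ["Adam", "SGD", "RMSProp"]
models = ["B0", "B1", "B3", "B5", "B7"]  # EfficientNet variants tested

def tuning_grid():
    """Enumerate every (model, lr, activation, optimiser) combination."""
    return [
        {"model": m, "lr": lr, "activation": act, "optimiser": opt}
        for m, lr, act, opt in product(models, learning_rates, activations, optimisers)
    ]

grid = tuning_grid()
# 5 models x (2 x 2 x 3) hyperparameter combinations = 60 training runs
print(len(grid))
```

Each configuration would then be trained and evaluated; the reported optimum (B5, lr 0.001, ReLU, Adam) is one point in this 60-run grid.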
Evaluating the Effectiveness of Alzheimer’s Detection Using GANs and Deep Convolutional Neural Networks (DCNNs) Pamungkas, Yuri; Syaifudin, Achmad; Crisnapati, Padma Nyoman; Hashim, Uda
International Journal of Robotics and Control Systems Vol 5, No 2 (2025)
Publisher : Association for Scientific Computing Electronics and Engineering (ASCEE)

DOI: 10.31763/ijrcs.v5i2.1855

Abstract

Alzheimer’s is a gradually worsening condition that damages the brain, making timely and precise diagnosis essential for better patient care and outcomes. However, existing detection methods using DCNNs are often hampered by class imbalance in datasets, particularly OASIS and ADNI, where some classes are underrepresented. This study proposes a novel approach integrating GANs with DCNNs to tackle class imbalance by creating synthetic samples for underrepresented categories. The primary focus of this research is demonstrating that using GANs for data augmentation can significantly strengthen DCNN performance in Alzheimer's detection by balancing the data distribution across all classes. The proposed method involves training DCNNs with both original and GAN-generated data, using an 80:10:10 partition for training, validation, and testing. GANs are applied to generate new samples for underrepresented classes within the OASIS and ADNI datasets, ensuring balanced datasets for model training. The experimental results show that using GANs improves classification performance significantly. For the OASIS dataset, the mean accuracy and F1-score rose from 99.64% and 95.07% (without GANs) to 99.98% and 99.96% (with GANs). For the ADNI dataset, the average accuracy and F1-score improved from 96.21% and 93.01% to 99.51% and 99.03% after applying GANs. Compared to existing methods, the proposed GANs + DCNNs model achieves higher accuracy and robustness in detecting various stages of Alzheimer's disease, particularly for minority classes. These findings confirm the effectiveness of GANs in improving DCNN performance for Alzheimer's detection, providing a promising framework for future diagnostic implementations.
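The balancing step the abstract describes — generating synthetic samples until every class matches the majority class — reduces to a simple count calculation before any GAN training begins. The class names and counts below are illustrative placeholders, not the actual OASIS/ADNI figures:

```python
def gan_samples_needed(class_counts):
    """For each class, how many synthetic samples a GAN must generate
    so that every class matches the size of the largest class."""
    target = max(class_counts.values())
    return {cls: target - n for cls, n in class_counts.items()}

# Hypothetical counts for a four-class dementia-staging dataset
counts = {"NonDemented": 3200, "VeryMild": 2240, "Mild": 896, "Moderate": 64}
print(gan_samples_needed(counts))
```

The resulting per-class quotas drive the GAN generation stage, after which the balanced set is split 80:10:10 for training, validation, and testing.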
Deep Learning Approach to Lung Cancer Detection Using the Hybrid VGG-GAN Architecture Pamungkas, Yuri; Kuswanto, Djoko; Syaifudin, Achmad; Triandini, Evi; Hapsari, Dian Puspita; Nakkliang, Kanittha; Uda, Muhammad Nur Afnan; Hashim, Uda
International Journal of Robotics and Control Systems Vol 5, No 3 (2025)
Publisher : Association for Scientific Computing Electronics and Engineering (ASCEE)

DOI: 10.31763/ijrcs.v5i3.1923

Abstract

Lung cancer ranks among the primary contributors to cancer-related deaths globally, highlighting the need for accurate and efficient detection methods to enable early diagnosis. However, deep learning models such as VGG16 and VGG19, commonly used for CT scan image classification, often face challenges related to class imbalance, resulting in classification bias and reduced sensitivity to minority classes. This study contributes by proposing an integration of the VGG architecture and Generative Adversarial Networks (GANs) to improve lung cancer classification performance through balanced and realistic synthetic data augmentation. The proposed approach was evaluated using two datasets: the IQ-OTH/NCCD Dataset, which classifies patients into Benign, Malignant, and Normal categories based on clinical condition, and the Lung Cancer CT Scan Dataset, annotated with histopathological labels: Adenocarcinoma, Squamous Cell Carcinoma, Large Cell Carcinoma, and Normal. The method involves initial training of the VGG model without augmentation, followed by GAN-based data generation to balance the class distribution. The experimental results show that, prior to augmentation, the models achieved relatively high overall accuracy but performed poorly on minority classes (marked by low precision and F1-scores, and an FPR exceeding 8% in certain cases). After GAN-based augmentation, all performance metrics improved dramatically and consistently across all classes, achieving near-perfect precision, TPR, and F1-score, an overall accuracy of 99.99%, and an FPR sharply reduced to around 0.001%. In conclusion, the integration of GAN and VGG proved effective in overcoming data imbalance and enhancing model generalization, making it a promising solution for AI-based lung cancer diagnostic systems.
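The per-class metrics reported in the abstract (precision, TPR, F1-score, FPR) all derive from the four confusion-matrix counts, which is why overall accuracy can look high while minority classes suffer. A minimal sketch with illustrative counts:

```python
def per_class_metrics(tp, fp, fn, tn):
    """Compute the standard per-class metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)
    tpr = tp / (tp + fn)   # true positive rate (sensitivity / recall)
    fpr = fp / (fp + tn)   # false positive rate
    f1 = 2 * precision * tpr / (precision + tpr)
    return {"precision": precision, "TPR": tpr, "FPR": fpr, "F1": f1}

# Illustrative counts for one class (not the paper's actual numbers)
print(per_class_metrics(tp=99, fp=1, fn=1, tn=99))
```

Evaluating these per class (rather than only overall accuracy) is what exposes the pre-augmentation bias the abstract highlights.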
Exploring the Determinants of User Acceptance for the Digital Diary Application in Type 1 Diabetes Management: A Structural Equation Modeling Approach Triandini, Evi; Permana, Putu Adi Guna; Hanief, Shofwan; Kuswanto, Djoko; Pamungkas, Yuri; Perwitasari, Rayi Kurnia; Hisbiyah, Yuni; Rochmah, Nur; Faizi, Muhammad
Journal of Applied Data Sciences Vol 6, No 3: September 2025
Publisher : Bright Publisher

DOI: 10.47738/jads.v6i3.767

Abstract

Effective management of Type 1 Diabetes (T1D), especially in children, requires continuous monitoring and care. Digital health applications have become vital in supporting routine T1D management, including insulin delivery, glucose monitoring, nutrition, and physical activity tracking. This study investigates the factors influencing user acceptance of a digital diary app designed for children with T1D and their families. Using an extended Technology Acceptance Model incorporating Trust, Perceived Risk, Perceived Enjoyment, and Social Influence, a survey was conducted with 114 participants, including parents, physicians, and dietitians. Data were analyzed using Partial Least Squares Structural Equation Modeling. The findings indicate that perceived usefulness, trust, and social influence significantly affect users' attitudes and intentions to use the app; these hypotheses were accepted based on their path coefficients and p-values. Conversely, the hypothesised relations between perceived ease of use, enjoyment, and risk and intention to use were rejected, indicating non-significant effects on user intention. Furthermore, this study recommends prioritizing robust security features, fostering user trust, and engaging social networks to enhance digital health adoption in pediatric care. Future research should further explore the roles of perceived risk and enjoyment in sustaining long-term engagement.
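The hypothesis-testing logic in PLS-SEM studies of this kind — accept a hypothesised path when its p-value falls below the significance threshold — can be sketched as follows; the path labels, coefficients, and p-values are hypothetical illustrations chosen only to mirror the pattern of results in the abstract, not the study's actual estimates:

```python
def evaluate_hypotheses(paths, alpha=0.05):
    """Accept each hypothesised path whose p-value is below alpha.

    `paths` maps a hypothesis label to a (path_coefficient, p_value) pair.
    """
    return {
        label: "accepted" if p_value < alpha else "rejected"
        for label, (coeff, p_value) in paths.items()
    }

# Hypothetical values (illustrative only)
paths = {
    "Perceived Usefulness -> Intention": (0.42, 0.001),
    "Trust -> Intention": (0.31, 0.004),
    "Social Influence -> Intention": (0.27, 0.012),
    "Perceived Ease of Use -> Intention": (0.08, 0.310),
    "Perceived Enjoyment -> Intention": (0.05, 0.450),
    "Perceived Risk -> Intention": (-0.06, 0.270),
}
for label, verdict in evaluate_hypotheses(paths).items():
    print(f"{label}: {verdict}")
```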
Cranioplasty Training Innovation Using Design Thinking: Augmented Reality and Interchangeability-Based Mannequin Prototype Kuswanto, Djoko; Alifah Putri, Athirah Hersyadea; Zulaikha, Ellya; Apriawan, Tedy; Pamungkas, Yuri; Triandini, Evi; Jafari, Nadya Paramitha; Chusak, Thassaporn
MATRIK : Jurnal Manajemen, Teknik Informatika dan Rekayasa Komputer Vol. 24 No. 3 (2025)
Publisher : Universitas Bumigora

DOI: 10.30812/matrik.v24i3.5055

Abstract

Cranioplasty, a surgical procedure to reconstruct the anatomical structure of the human skull, is commonly performed in Indonesia due to the malignancy of diseases, traffic accidents, and workplace injuries. If left untreated, this condition can lead to serious complications. Although cranioplasty is generally considered a relatively easy surgery, it has a fairly high postoperative complication rate of around 10.3%. The decreasing availability of cadavers for anatomical studies has significantly limited training opportunities. Therefore, efficient and effective training tools are essential, especially when traditional resources are insufficient to meet educational needs. Additionally, the training capabilities of commercially available mannequins or replicas used in medical institutions remain limited. The main objective of this project was to develop a smart, modular cranioplasty training mannequin designed for repeated use, incorporating Augmented Reality (AR) technology to visualize anatomical structures that cannot be physically replicated. Using a design thinking approach, data was collected through interviews with neurosurgeons, neurosurgery residents, and cranioplasty specialists, as well as through a review of relevant literature. Usability testing of the developed prototype yielded promising results, with high ratings for ease of use (4.8), training effectiveness (4.5), anatomical realism (4.3), and material durability (4.5) on a 5-point Likert scale. These findings demonstrated strong user approval and confirmed the model’s potential to support surgical skill development in a practical and reproducible manner. The resulting AR-integrated training mannequin offers an innovative, engaging, and durable solution to address current challenges in neurosurgical education, especially in resource-constrained settings.
A Comprehensive Review of EEGLAB for EEG Signal Processing: Prospects and Limitations Pamungkas, Yuri; Rangkuti, Rahmah Yasinta; Triandini, Evi; Nakkliang, Kanittha; Yunanto, Wawan; Uda, Muhammad Nur Afnan; Hashim, Uda
Journal of Robotics and Control (JRC) Vol. 6 No. 4 (2025)
Publisher : Universitas Muhammadiyah Yogyakarta

DOI: 10.18196/jrc.v6i4.27084

Abstract

EEGLAB is a MATLAB-based software package that is widely used for EEG signal processing due to its complete feature set, analysis flexibility, and active open-source community. This review aims to evaluate the use of EEGLAB based on 55 research articles published between 2020 and 2024, and to analyze its prospects and limitations in EEG processing. The articles were obtained from reputable databases, namely ScienceDirect, IEEE Xplore, SpringerLink, PubMed, Taylor & Francis, and Emerald Insight, and went through a strict study selection stage based on eligibility criteria, topic relevance, and methodological quality. The review results show that EEGLAB is widely used for EEG data preprocessing such as filtering, ICA, and artifact removal, and for advanced analysis such as ERP, ERSP, brain connectivity, and activity source estimation. EEGLAB has bright prospects in the development of neuroinformatics technology, machine learning integration, multimodal analysis, and large-scale EEG analysis, all of which are increasingly needed. However, EEGLAB still has significant limitations, including a high reliance on manual inspection in preprocessing, low spatial resolution in source modeling, limited multimodal integration, low computational efficiency for large-scale EEG data, and a steep learning curve for new users. To overcome these limitations, future research is recommended to focus on developing more accurate automation methods, increasing the spatial resolution of source analysis, more efficient multimodal integration, stronger computational support, and implementing open science with a standardized EEG data format. This review provides a novel contribution by systematically mapping EEGLAB’s usage trends and pinpointing critical technical and methodological gaps that must be addressed for broader neurotechnology adoption.
The Emerging Role of Artificial Intelligence in Identifying Epileptogenic Zone: A Systematic Literature Review Pamungkas, Yuri; Radiansyah, Riva Satya; Pratasik, Stralen; Krisnanda, Made; Derek, Natan
Journal of Robotics and Control (JRC) Vol. 6 No. 5 (2025)
Publisher : Universitas Muhammadiyah Yogyakarta

DOI: 10.18196/jrc.v6i5.27281

Abstract

Identifying epileptogenic zones (EZs) is a crucial step in the pre-surgical evaluation of drug-resistant epilepsy patients. Conventional methods, including EEG/SEEG visual inspection and neurofunctional imaging, often face challenges in accuracy, reproducibility, and subjectivity. The rapid development of artificial intelligence (AI) technologies in signal processing and neuroscience has enabled their growing use in detecting epileptogenic zones. This systematic review aims to explore recent developments in AI applications for localizing epileptogenic zones, focusing on algorithm types, dataset characteristics, and performance outcomes. A comprehensive literature search was conducted in 2025 across databases such as ScienceDirect, Springer Nature, and IEEE Xplore using relevant keyword combinations. The study selection followed PRISMA guidelines, resulting in 34 scientific articles published between 2020 and 2024. Extracted data included AI methods, algorithm types, dataset modalities, and performance metrics (accuracy, AUC, sensitivity, and F1-score). Results showed that deep learning was the most used approach (44%), followed by machine learning (35%), multi-methods (18%), and knowledge-based systems (3%). CNN and ANN were the most commonly applied algorithms, particularly in scalp EEG and SEEG-based studies. Datasets ranged from public sources (Bonn, CHB-MIT) to high-resolution clinical SEEG recordings. Multimodal and hybrid models demonstrated superior performance, with several studies achieving accuracy rates above 98%. This review confirms that AI (especially deep learning with SEEG and multimodal integration) has strong potential to improve the precision, efficiency, and scalability of EZ detection. To facilitate clinical adoption, future research should focus on standardizing data pipelines, validating AI models in real-world settings, and developing explainable, ethically responsible AI systems.
Transforming EEG into Scalable Neurotechnology: Advances, Frontiers, and Future Directions Pamungkas, Yuri; Triandini, Evi; Forca, Adrian Jaleco; Sangsawang, Thosporn; Karim, Abdul
Buletin Ilmiah Sarjana Teknik Elektro Vol. 7 No. 3 (2025): September
Publisher : Universitas Ahmad Dahlan

DOI: 10.12928/biste.v7i3.13824

Abstract

Electroencephalography (EEG) is a key neurotechnology that enables non-invasive, high-temporal resolution monitoring of brain activity. This review examines recent advancements in EEG-based neuroscience from 2021 to 2025, with a focus on applications in neurodegenerative disease diagnosis, cognitive assessment, emotion recognition, and brain-computer interface (BCI) development. Twenty peer-reviewed studies were selected using predefined inclusion criteria, emphasizing the use of machine learning on EEG data. Each study was assessed based on EEG settings, feature extraction, classification models, and outcomes. Emerging trends show increased adoption of advanced computational techniques such as deep learning, capsule networks, and explainable AI for tasks like seizure prediction and psychiatric classification. Applications have expanded to real-world domains including neuromarketing, emotion-aware architecture, and driver alertness systems. However, methodological inconsistencies (ranging from varied preprocessing protocols to inconsistent performance metrics) pose significant challenges to reproducibility and real-world deployment. Technical limitations such as inter-subject variability, low spatial resolution, and artifact contamination were found to negatively impact model accuracy and generalizability. Moreover, most studies lacked transparency regarding bias mitigation, dataset diversity, and ethical safeguards such as data privacy and model interpretability. Future EEG research must integrate multimodal data (e.g., EEG-fNIRS), embrace real-time edge processing, adopt federated learning frameworks, and prioritize personalized, explainable models. Greater emphasis on reproducibility and ethical standards is essential for the clinical translation of EEG-based technologies. This review highlights EEG’s expanding role in neuroscience and emphasizes the need for rigorous, ethically grounded innovation.
Transfer Learning Models for Precision Medicine: A Review of Current Applications Pamungkas, Yuri; Aung, Myo Min; Yulan, Gao; Uda, Muhammad Nur Afnan; Hashim, Uda
Buletin Ilmiah Sarjana Teknik Elektro Vol. 7 No. 3 (2025): September
Publisher : Universitas Ahmad Dahlan

DOI: 10.12928/biste.v7i3.14286

Abstract

In recent years, Transfer Learning (TL) models have demonstrated significant promise in advancing precision medicine by enabling the application of machine learning techniques to medical data with limited labeled information. TL overcomes the challenge of acquiring large, labeled datasets, which is often a limitation in medical fields. By leveraging knowledge from pre-trained models, TL offers a solution to improve diagnostic accuracy and decision-making processes in various healthcare domains, including medical imaging, disease classification, and genomics. The research contribution of this review is to systematically examine the current applications of TL models in precision medicine, providing insights into how these models have been successfully implemented to improve patient outcomes across different medical specialties. In this review, studies sourced from the Scopus database, all published in 2024 and selected for their "open access" availability, were analyzed. The research methods involved using TL techniques like fine-tuning, feature-based learning, and model-based transfer learning on diverse datasets. The results of the studies demonstrated that TL models significantly enhanced the accuracy of medical diagnoses, particularly in areas such as brain tumor detection, diabetic retinopathy, and COVID-19 detection. Furthermore, these models facilitated the classification of rare diseases, offering valuable contributions to personalized medicine. In conclusion, Transfer Learning has the potential to revolutionize precision medicine by providing cost-effective and scalable solutions for improving diagnostic capabilities and treatment personalization. The continued development and integration of TL models in clinical practice promise to further enhance the quality of patient care.
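The fine-tuning strategy summarised in the abstract — reusing a pre-trained backbone and training only a new task-specific head — can be sketched framework-free. The layer names and the dictionary representation below are a hypothetical illustration of the idea, not any specific library's API:

```python
def build_transfer_model(backbone_layers, num_classes):
    """Represent a transfer-learning model as a list of layer records:
    pre-trained backbone layers are frozen; only a new head is trainable."""
    params = [{"layer": name, "trainable": False} for name in backbone_layers]
    params.append({"layer": f"head({num_classes} classes)", "trainable": True})
    return params

# Feature-extraction setup: freeze four hypothetical backbone layers,
# attach a fresh 3-class classification head for the medical task
model = build_transfer_model(["conv1", "conv2", "conv3", "conv4"], num_classes=3)
trainable = [p["layer"] for p in model if p["trainable"]]
print(trainable)  # only the new head is updated during fine-tuning
```

In full fine-tuning, by contrast, some or all backbone layers would also be marked trainable, usually with a smaller learning rate to preserve the pre-trained features.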
Recent Advances in Artificial Intelligence for Dyslexia Detection: A Systematic Review Pamungkas, Yuri; Rangkuti, Rahmah Yasinta; Karim, Abdul; Sangsawang, Thosporn
International Journal of Robotics and Control Systems Vol 5, No 3 (2025)
Publisher : Association for Scientific Computing Electronics and Engineering (ASCEE)

DOI: 10.31763/ijrcs.v5i3.2057

Abstract

The prevalence of dyslexia, a common neurodevelopmental learning disorder, poses ongoing challenges for early detection and intervention. With the advancement of artificial intelligence (AI) technologies in the fields of healthcare and education, AI has emerged as a promising tool for supporting dyslexia screening and diagnosis. This systematic review aimed to identify recent developments in AI applications for dyslexia detection, focusing on the methods used, types of algorithms, datasets, and their performance outcomes. A comprehensive literature search was conducted in 2025 across databases including ScienceDirect, IEEE Xplore, and PubMed using a combination of relevant MeSH terms. The article selection process followed the PRISMA guidelines, resulting in the inclusion of 31 eligible studies. Data were extracted on AI approaches, algorithm types, dataset characteristics, and key performance metrics. The results revealed that machine learning (ML) was the most widely applied method (58.06%), followed by multi-method approaches (22.58%), deep learning (16.13%), and large language models (3.23%). Among the ML algorithms, Random Forest and Decision Tree were the most commonly used due to their robustness and performance on structured datasets. In the deep learning category, CNNs were the most frequently used models, especially for image-based and sequential input data. The datasets varied widely, including digital cognitive tasks, EEG, MRI, handwriting, and eye-tracking data, with several studies employing multimodal combinations. Ensemble and hybrid models demonstrated superior performance, with some achieving accuracy rates exceeding 98%. This review highlights that AI, particularly ML and multimodal ensemble methods, holds strong potential for improving the accuracy, scalability, and accessibility of dyslexia detection. Future research should prioritize large-scale, multimodal datasets, interpretable models, and adaptive learning systems to enhance real-world implementation.