Pinem, Joshua
Unknown Affiliation

Published: 2 Documents
Articles

Found 2 Documents

Explainable Ensemble Learning Framework with SMOTE, SHAP and LIME for Predicting 30-Day Readmission in Diabetic Patients
Pinem, Joshua; Astuti, Widi; Adiwijaya
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 9 No 5 (2025): October 2025
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

DOI: 10.29207/resti.v9i5.6977

Abstract

Hospital readmission among diabetic patients poses a significant burden on healthcare systems due to its frequency and associated costs. This study presents a machine learning framework for predicting 30-day readmission in diabetic patients using the Diabetes 130-US Hospitals dataset. The framework integrates data preprocessing, SMOTE for class balancing, ensemble learning, and explainable AI (SHAP and LIME) to enhance both accuracy and interpretability. Multiple models were evaluated, and the best performance was achieved by a weighted ensemble with a recall of 89.43% and an F1-score of 0.6612, indicating strong sensitivity. Explainability analysis using SHAP and LIME highlighted key predictors, notably Medication Change Status and Inpatient Admissions, which are clinically relevant. By combining predictive performance with transparent explanations, the proposed framework offers a practical and trustworthy tool for clinical decision support in managing diabetic readmissions.
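The balancing and ensembling steps described in the abstract (SMOTE oversampling of the minority class, then a weighted soft-vote over several classifiers) can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the data is synthetic rather than the Diabetes 130-US Hospitals dataset, the two base models and the 0.6/0.4 weights are hypothetical, and the SHAP/LIME explanation step is omitted.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.metrics import recall_score, f1_score

def smote(X_min, n_new, k=5, rng=None):
    """Minimal SMOTE: each synthetic point is interpolated between a
    minority sample and one of its k nearest minority neighbours."""
    rng = rng or np.random.default_rng(0)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = nn.kneighbors(X_min)            # idx[:, 0] is the point itself
    base = rng.integers(0, len(X_min), n_new)
    neigh = idx[base, rng.integers(1, k + 1, n_new)]
    lam = rng.random((n_new, 1))             # interpolation factor in [0, 1)
    return X_min[base] + lam * (X_min[neigh] - X_min[base])

# Synthetic stand-in for an imbalanced readmission dataset (~85/15 split)
X, y = make_classification(n_samples=2000, weights=[0.85], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# Oversample the minority class on the training split only
X_min = X_tr[y_tr == 1]
n_new = (y_tr == 0).sum() - (y_tr == 1).sum()
X_bal = np.vstack([X_tr, smote(X_min, n_new)])
y_bal = np.concatenate([y_tr, np.ones(n_new, dtype=int)])

# Weighted soft vote: average predicted probabilities with fixed weights
models = [RandomForestClassifier(random_state=42).fit(X_bal, y_bal),
          LogisticRegression(max_iter=1000).fit(X_bal, y_bal)]
weights = [0.6, 0.4]                         # hypothetical ensemble weights
proba = sum(w * m.predict_proba(X_te)[:, 1] for w, m in zip(weights, models))
pred = (proba >= 0.5).astype(int)
print(f"recall={recall_score(y_te, pred):.3f}  f1={f1_score(y_te, pred):.3f}")
```

Oversampling only the training split matters: applying SMOTE before the split would leak synthetic neighbours of test points into training and inflate the reported recall.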
Benchmarking Transformer Architectures for Chest X-ray Classification
Pinem, Joshua; Astuti, Widi; Adiwijaya
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 10 No 1 (2026): February 2026
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

DOI: 10.29207/resti.v10i1.7132

Abstract

Lung diseases remain a major global health concern, necessitating accurate and timely diagnosis. Chest X-ray (CXR) imaging is widely used but challenging to interpret due to overlapping radiographic features and subjective variability among radiologists. Deep learning approaches, particularly Convolutional Neural Networks (CNNs), have shown promise but are limited in capturing global spatial dependencies. Vision Transformers (ViTs) overcome this limitation through self-attention, making them increasingly attractive for medical image analysis. This study systematically evaluates 13 Transformer-based architectures across three CXR datasets with distinct tasks: Pneumonia (3-class: Normal, Bacterial, Viral), COVID-QU-Ex (3-class: Normal, Non-COVID Pneumonia, COVID-19), and Tuberculosis (2-class: Normal, Tuberculosis). All models were trained under a unified setup with consistent preprocessing, augmentation, and evaluation protocols. To improve robustness, a soft-voting ensemble of the top five models was also implemented. Results demonstrate that Transformer-based models provide highly competitive performance. On the Pneumonia dataset, the ensemble achieved an accuracy of 0.8743 and an F1-score of 0.8615, surpassing several single models such as DeiT-Base (F1 = 0.8725). On COVID-QU-Ex, the soft-voting ensemble obtained an accuracy of 0.9593 and an F1-score of 0.9582, effectively balancing precision and recall. On Tuberculosis, ViT-B/16 and MobileViT-S achieved perfect performance (F1 = 1.0), likely influenced by dataset imbalance. These findings highlight the clinical potential of Transformer-based models, particularly when combined through ensembles, for robust and accurate CXR classification.
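The soft-voting step used in the abstract (averaging the per-class probabilities of the top models and taking the argmax) can be sketched as follows. The probability arrays here are toy values standing in for the softmax outputs of hypothetical CXR classifiers; the paper's actual models, weights, and data are not reproduced.

```python
import numpy as np

def soft_vote(prob_list, weights=None):
    """Soft voting: weighted mean of per-model class probabilities,
    then argmax over classes for the final label."""
    probs = np.stack(prob_list)                  # (n_models, n_samples, n_classes)
    if weights is None:
        weights = np.full(len(prob_list), 1.0 / len(prob_list))
    avg = np.tensordot(weights, probs, axes=1)   # weighted mean over models
    return avg.argmax(axis=1), avg

# Toy softmax outputs from three hypothetical models: 4 images, 3 classes
p1 = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4], [0.5, 0.4, 0.1]])
p2 = np.array([[0.6, 0.3, 0.1], [0.2, 0.6, 0.2], [0.2, 0.5, 0.3], [0.4, 0.4, 0.2]])
p3 = np.array([[0.8, 0.1, 0.1], [0.3, 0.5, 0.2], [0.1, 0.2, 0.7], [0.3, 0.5, 0.2]])

labels, avg = soft_vote([p1, p2, p3])
print(labels)  # class index with the highest mean probability per image
```

Averaging probabilities (soft voting) rather than majority-voting hard labels lets a model that is confidently right outweigh two models that are marginally wrong, which is why it tends to balance precision and recall better than hard voting.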