Articles

Found 2 Documents

Comparison of ResNet-50 and DenseNet-121 Architectures in Classifying Diabetic Retinopathy
Yoga Pramana Putra, I Putu Gede; Ni Wayan Jeri Kusuma Dewi; Putu Surya Wedra Lesmana; I Gede Totok Suryawan; Putu Satria Udyana Putra
Indonesian Journal of Data and Science Vol. 6 No. 1 (2025): Indonesian Journal of Data and Science
Publisher: yocto brain

DOI: 10.56705/ijodas.v6i1.232

Abstract

Introduction: Diabetic Retinopathy (DR) is a vision-threatening complication of diabetes that requires early and accurate diagnosis. Deep learning offers promising solutions for automating DR classification from retinal images. This study compares the performance of two convolutional neural network (CNN) architectures, ResNet-50 and DenseNet-121, for classifying DR severity levels.
Methods: A dataset of 2,000 pre-processed and augmented retinal images was used, categorized into four classes: normal, mild, moderate, and severe. Both models were trained using two approaches: a standard train-test split and Stratified K-Fold Cross Validation (k=5). Data augmentation techniques such as flipping, rotation, zooming, and translation were applied to enhance model generalization. The models were trained using the Adam optimizer with a learning rate of 0.001, dropout of 0.2, and learning rate adjustment via ReduceLROnPlateau. Performance was evaluated using accuracy, precision, recall, and F1-score.
Results: ResNet-50 outperformed DenseNet-121 across all evaluation metrics. Without K-Fold, ResNet-50 achieved 84% accuracy compared to DenseNet-121's 80%; with K-Fold, ResNet-50 scored 83% and DenseNet-121 81%. ResNet-50 also showed better balance in class-wise classification, with higher recall and F1-score, especially for the moderate and severe DR classes. Confusion matrices confirmed fewer misclassifications with ResNet-50.
Conclusions: ResNet-50 provides superior accuracy and robustness in classifying DR severity levels compared to DenseNet-121. While K-Fold Cross Validation enhances model stability, it slightly reduces overall accuracy. These findings support the use of ResNet-50 in developing reliable deep learning-based screening tools for early DR detection in clinical practice.
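
As an illustration of the training setup described in the Methods above, the following is a minimal sketch (not the authors' published code), assuming TensorFlow/Keras, 224x224 RGB inputs that are already pre-processed, 30 epochs, and a batch size of 32; the function names (build_model, evaluate_kfold) are illustrative. The backbones, the Adam learning rate of 0.001, the dropout of 0.2, the augmentation operations, ReduceLROnPlateau, and the Stratified 5-Fold protocol follow the abstract.

```python
# Minimal sketch, assuming TensorFlow/Keras, 224x224 pre-processed inputs,
# 30 epochs, and batch size 32; backbones, Adam (lr=0.001), dropout 0.2,
# augmentation ops, ReduceLROnPlateau, and Stratified 5-Fold follow the abstract.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import StratifiedKFold

NUM_CLASSES = 4  # normal, mild, moderate, severe

def build_model(backbone_name: str) -> tf.keras.Model:
    """Pretrained backbone + augmentation + small classification head."""
    backbones = {
        "resnet50": tf.keras.applications.ResNet50,
        "densenet121": tf.keras.applications.DenseNet121,
    }
    base = backbones[backbone_name](include_top=False, weights="imagenet",
                                    input_shape=(224, 224, 3))
    augment = tf.keras.Sequential([  # flipping, rotation, zooming, translation
        tf.keras.layers.RandomFlip("horizontal_and_vertical"),
        tf.keras.layers.RandomRotation(0.1),
        tf.keras.layers.RandomZoom(0.1),
        tf.keras.layers.RandomTranslation(0.1, 0.1),
    ])
    inputs = tf.keras.Input(shape=(224, 224, 3))
    x = augment(inputs)
    x = base(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dropout(0.2)(x)  # dropout of 0.2 as reported
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def evaluate_kfold(images: np.ndarray, labels: np.ndarray, backbone_name: str) -> float:
    """Stratified K-Fold Cross Validation (k=5) as described in the Methods."""
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
    fold_accuracies = []
    for train_idx, val_idx in skf.split(images, labels):
        model = build_model(backbone_name)
        reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss",
                                                         factor=0.5, patience=3)
        model.fit(images[train_idx], labels[train_idx],
                  validation_data=(images[val_idx], labels[val_idx]),
                  epochs=30, batch_size=32, callbacks=[reduce_lr], verbose=0)
        _, accuracy = model.evaluate(images[val_idx], labels[val_idx], verbose=0)
        fold_accuracies.append(accuracy)
    return float(np.mean(fold_accuracies))
```

Calling evaluate_kfold once per backbone corresponds to the K-Fold comparison reported in the Results (83% for ResNet-50 vs. 81% for DenseNet-121), while a single train-test split corresponds to the non-K-Fold figures.
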
Comparative Analysis of Gradient-Based Optimizers in Feedforward Neural Networks for Titanic Survival Prediction
Adi Pratama, I Putu; Ni Wayan Jeri Kusuma Dewi
Indonesian Journal of Data and Science Vol. 6 No. 1 (2025): Indonesian Journal of Data and Science
Publisher: yocto brain

DOI: 10.56705/ijodas.v6i1.219

Abstract

Introduction: Feedforward Neural Networks (FNNs), also known as Multilayer Perceptrons (MLPs), are widely recognized for their capacity to model complex nonlinear relationships. This study evaluates the performance of various gradient-based optimization algorithms in training FNNs for Titanic survival prediction, a binary classification task on structured tabular data.
Methods: The Titanic dataset, consisting of 891 passenger records, was pre-processed via feature selection, encoding, and normalization. Three FNN architectures, small ([64, 32, 16]), medium ([128, 64, 32]), and large ([256, 128, 64]), were trained using eight gradient-based optimizers: BGD, SGD, Mini-Batch GD, NAG, Heavy Ball, Adam, RMSprop, and Nadam. Regularization techniques such as dropout and an L2 penalty, along with batch normalization and Leaky ReLU activation, were applied. Training was conducted with and without a dynamic learning rate scheduler, and model performance was evaluated using accuracy, precision, recall, F1-score, and cross-entropy loss.
Results: The Adam optimizer combined with the medium architecture achieved the highest accuracy of 82.68% and an F1-score of 0.77 when using a learning rate scheduler. RMSprop and Nadam also performed competitively. Models without learning rate schedulers generally showed reduced performance and slower convergence. Smaller architectures trained faster but yielded lower accuracy, while larger architectures offered marginal gains at the cost of computational efficiency.
Conclusions: Adam demonstrated superior performance among the tested optimizers, especially when coupled with learning rate scheduling. These findings highlight the importance of optimizer choice and learning rate adaptation in enhancing FNN performance on tabular datasets. Future research should explore additional architectures and optimization strategies for broader generalizability.
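
To make the Methods concrete, below is a minimal sketch (not the authors' code) of one FNN configuration, assuming TensorFlow/Keras and the usual Kaggle Titanic columns; the column choices, imputation, dropout rate, L2 strength, epoch and batch settings, and the use of ReduceLROnPlateau as the dynamic learning rate scheduler are assumptions, while the hidden-layer sizes, Leaky ReLU, batch normalization, and the optimizer choices follow the abstract. Function names (build_fnn, train_and_evaluate) are illustrative.

```python
# Minimal sketch, assuming TensorFlow/Keras and standard Titanic columns.
# Dropout rate, L2 strength, epochs, batch size, and ReduceLROnPlateau as the
# scheduler are assumptions; hidden sizes, Leaky ReLU, batch normalization,
# and the optimizer list follow the abstract.
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

ARCHITECTURES = {"small": [64, 32, 16], "medium": [128, 64, 32], "large": [256, 128, 64]}
OPTIMIZERS = {
    # BGD / SGD / Mini-Batch GD differ by batch size rather than by optimizer object.
    "sgd": lambda: tf.keras.optimizers.SGD(learning_rate=0.01),
    "heavy_ball": lambda: tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
    "nag": lambda: tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True),
    "adam": lambda: tf.keras.optimizers.Adam(learning_rate=0.001),
    "rmsprop": lambda: tf.keras.optimizers.RMSprop(learning_rate=0.001),
    "nadam": lambda: tf.keras.optimizers.Nadam(learning_rate=0.001),
}

def build_fnn(hidden_units, n_features, l2=1e-4, dropout=0.3):
    """Dense -> BatchNorm -> LeakyReLU -> Dropout blocks, sigmoid output for survival."""
    layers = [tf.keras.Input(shape=(n_features,))]
    for units in hidden_units:
        layers += [
            tf.keras.layers.Dense(units, kernel_regularizer=tf.keras.regularizers.l2(l2)),
            tf.keras.layers.BatchNormalization(),
            tf.keras.layers.LeakyReLU(),
            tf.keras.layers.Dropout(dropout),
        ]
    layers.append(tf.keras.layers.Dense(1, activation="sigmoid"))
    return tf.keras.Sequential(layers)

def train_and_evaluate(df: pd.DataFrame, size="medium", optimizer="adam", use_scheduler=True):
    # Feature selection, encoding, and normalization (assumed column set).
    df = df.copy()
    df["Sex"] = df["Sex"].map({"male": 0, "female": 1})
    df["Age"] = df["Age"].fillna(df["Age"].median())
    df["Fare"] = df["Fare"].fillna(df["Fare"].median())
    features = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare"]
    X = StandardScaler().fit_transform(df[features])
    y = df["Survived"].values
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

    model = build_fnn(ARCHITECTURES[size], n_features=X.shape[1])
    model.compile(optimizer=OPTIMIZERS[optimizer](), loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
    callbacks = []
    if use_scheduler:
        callbacks.append(tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss",
                                                              factor=0.5, patience=5))
    model.fit(X_tr, y_tr, validation_split=0.2, epochs=100, batch_size=32,
              callbacks=callbacks, verbose=0)
    return dict(zip(model.metrics_names, model.evaluate(X_te, y_te, verbose=0)))
```

Sweeping train_and_evaluate over the architecture and optimizer grids, with use_scheduler toggled on and off, reproduces the comparison design; the abstract reports the medium network with Adam and a scheduler performing best (82.68% accuracy, F1-score 0.77).
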