Found 3 Documents

Comparison of ResNet-50 and DenseNet-121 Architectures in Classifying Diabetic Retinopathy
Yoga Pramana Putra, I Putu Gede; Ni Wayan Jeri Kusuma Dewi; Putu Surya Wedra Lesmana; I Gede Totok Suryawan; Putu Satria Udyana Putra
Indonesian Journal of Data and Science, Vol. 6 No. 1 (2025)
Publisher: yocto brain

DOI: 10.56705/ijodas.v6i1.232

Abstract

Introduction: Diabetic Retinopathy (DR) is a vision-threatening complication of diabetes that requires early and accurate diagnosis. Deep learning offers promising solutions for automating DR classification from retinal images. This study compares the performance of two convolutional neural network (CNN) architectures, ResNet-50 and DenseNet-121, for classifying DR severity levels. Methods: A dataset of 2,000 pre-processed and augmented retinal images was used, categorized into four classes: normal, mild, moderate, and severe. Both models were trained using two approaches: a standard train-test split and Stratified K-Fold Cross Validation (k=5). Data augmentation techniques such as flipping, rotation, zooming, and translation were applied to enhance model generalization. The models were trained using the Adam optimizer with a learning rate of 0.001, dropout of 0.2, and learning rate adjustment via ReduceLROnPlateau. Performance was evaluated using accuracy, precision, recall, and F1-score. Results: ResNet-50 outperformed DenseNet-121 across all evaluation metrics. Without K-Fold, ResNet-50 achieved 84% accuracy compared to DenseNet-121's 80%; with K-Fold, ResNet-50 scored 83% and DenseNet-121 81%. ResNet-50 also demonstrated better balance in class-wise classification, with higher recall and F1-score, especially for the moderate and severe DR classes. Confusion matrices confirmed fewer misclassifications with ResNet-50. Conclusions: ResNet-50 provides superior accuracy and robustness in classifying DR severity levels compared to DenseNet-121. While K-Fold Cross Validation enhances model stability, it slightly reduces overall accuracy. These findings support the use of ResNet-50 in developing reliable deep learning-based screening tools for early DR detection in clinical practice.
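
The training setup described above maps onto a fairly standard transfer-learning pipeline. The sketch below shows one way the reported configuration (ResNet-50 backbone, Adam at a 0.001 learning rate, dropout of 0.2, ReduceLROnPlateau, and flip/rotation/zoom/translation augmentation) could be assembled in Keras; the input size, augmentation ranges, callback parameters, and dataset handling are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): a ResNet-50 classifier for four
# DR severity classes, using the hyperparameters reported in the abstract
# (Adam, lr=0.001, dropout=0.2, ReduceLROnPlateau). Image size, augmentation
# ranges, and data loading are assumptions for demonstration only.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # normal, mild, moderate, severe

# Augmentation roughly matching the abstract: flipping, rotation, zooming, translation
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
    layers.RandomTranslation(0.1, 0.1),
])

# ResNet-50 backbone; ImageNet weights are a common starting point (assumption)
base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg"
)

model = models.Sequential([
    augment,
    base,
    layers.Dropout(0.2),                              # dropout of 0.2 as reported
    layers.Dense(NUM_CLASSES, activation="softmax"),  # 4-way severity output
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Learning-rate adjustment via ReduceLROnPlateau, as described in the abstract
lr_callback = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss", factor=0.5, patience=3
)

# train_ds / val_ds would be tf.data.Dataset objects built from the retinal images
# model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=[lr_callback])
```
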
Comparative Analysis of Gradient-Based Optimizers in Feedforward Neural Networks for Titanic Survival Prediction
Adi Pratama, I Putu; Ni Wayan Jeri Kusuma Dewi
Indonesian Journal of Data and Science, Vol. 6 No. 1 (2025)
Publisher: yocto brain

DOI: 10.56705/ijodas.v6i1.219

Abstract

Introduction: Feedforward Neural Networks (FNNs), or Multilayer Perceptrons (MLPs), are widely recognized for their capacity to model complex nonlinear relationships. This study aims to evaluate the performance of various gradient-based optimization algorithms in training FNNs for Titanic survival prediction, a binary classification task on structured tabular data. Methods: The Titanic dataset, consisting of 891 passenger records, was pre-processed via feature selection, encoding, and normalization. Three FNN architectures, small ([64, 32, 16]), medium ([128, 64, 32]), and large ([256, 128, 64]), were trained using eight gradient-based optimizers: BGD, SGD, Mini-Batch GD, NAG, Heavy Ball, Adam, RMSprop, and Nadam. Regularization techniques such as dropout and an L2 penalty, along with batch normalization and Leaky ReLU activation, were applied. Training was conducted with and without a dynamic learning rate scheduler, and model performance was evaluated using accuracy, precision, recall, F1-score, and cross-entropy loss. Results: The Adam optimizer combined with the medium architecture achieved the highest accuracy of 82.68% and an F1-score of 0.77 when using a learning rate scheduler. RMSprop and Nadam also performed competitively. Models without learning rate schedulers generally showed reduced performance and slower convergence. Smaller architectures trained faster but yielded lower accuracy, while larger architectures offered marginal gains at the cost of computational efficiency. Conclusions: Adam demonstrated superior performance among the tested optimizers, especially when coupled with learning rate scheduling. These findings highlight the importance of optimizer choice and learning rate adaptation in enhancing FNN performance on tabular datasets. Future research should explore additional architectures and optimization strategies for broader generalizability.
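
As a point of reference for the reported configuration, the sketch below assembles the "medium" [128, 64, 32] architecture with Adam, dropout, an L2 penalty, batch normalization, Leaky ReLU, and a learning-rate scheduler in Keras; the input width, regularization strengths, and scheduler settings are illustrative assumptions rather than the authors' exact values.

```python
# Illustrative sketch (not the authors' code): the "medium" [128, 64, 32] FNN
# for binary Titanic survival prediction, with dropout, L2 regularization,
# batch normalization, Leaky ReLU, and a learning-rate scheduler. Input width,
# dropout rate, L2 strength, and schedule parameters are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

N_FEATURES = 8  # assumed number of encoded, normalized Titanic features

def dense_block(units):
    """Dense layer followed by batch norm, Leaky ReLU, and dropout."""
    return [
        layers.Dense(units, kernel_regularizer=regularizers.l2(1e-4)),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Dropout(0.3),
    ]

model = models.Sequential(
    [tf.keras.Input(shape=(N_FEATURES,))]
    + dense_block(128) + dense_block(64) + dense_block(32)
    + [layers.Dense(1, activation="sigmoid")]  # binary survival output
)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss="binary_crossentropy",
    metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()],
)

# Dynamic learning-rate scheduling; ReduceLROnPlateau is one common choice
scheduler = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss", factor=0.5, patience=5
)

# X_train / y_train would come from the preprocessed (encoded, normalized) Titanic data
# model.fit(X_train, y_train, validation_split=0.2, epochs=100,
#           batch_size=32, callbacks=[scheduler])
```
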
Sales Forecasting Analysis Using Fuzzy Time Series and Simple Linear Regression Methods at Toko Ari
Ni Luh Sri April Yanti; Ni Wayan Jeri Kusuma Dewi; I Gede Made Yudi Antara; Desak Made Dwi Utami Putra; Putu Wirayudi Aditama
Indonesian Journal of Data and Science, Vol. 6 No. 3 (2025)
Publisher: yocto brain

DOI: 10.56705/ijodas.v6i3.368

Abstract

Introduction: Forecasting, often referred to as prediction, helps assess conditions and anticipate future sales. In business, forecasting is crucial because it helps companies plan future operations, especially when facing sudden increases or decreases in sales and inventory. In retail in particular, forecasting supports purchasing merchandise, managing warehouse inventory, and reducing losses caused by changing customer preferences. Toko Ari, located on Jalan Raya Samu, Singapadu Kaler, Gianyar, Bali, also experiences fluctuations in monthly sales, and sales forecasting is expected to help keep its operations stable and smooth. Methods: This study used two methods, Fuzzy Time Series (FTS) and Simple Linear Regression (SLR), to forecast Toko Ari's monthly sales. Both methods use the same dataset: 13 months of Toko Ari sales data, from January 2024 to January 2025. The forecast results are then compared using the Mean Absolute Percentage Error (MAPE), which measures each model's prediction accuracy. Results: Both models produced fairly accurate predictions, with MAPE values below 10%. Of the two methods, Simple Linear Regression provided more accurate results with a MAPE of 3.57%, while the Fuzzy Time Series method produced a MAPE of 5.53%. This difference indicates that the linear regression model is more appropriate for Toko Ari's sales data, especially since the data pattern tends to follow a linear trend.
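
To make the comparison concrete, the sketch below fits a simple linear trend to 13 monthly observations and scores it with MAPE, mirroring the Simple Linear Regression side of the comparison; the sales figures are placeholders, not Toko Ari's actual data.

```python
# Illustrative sketch (not the authors' code): Simple Linear Regression on
# 13 months of sales, evaluated with MAPE. The sales values below are
# placeholders for demonstration, not data from Toko Ari.
import numpy as np
from sklearn.linear_model import LinearRegression

# Months 1..13 (January 2024 - January 2025) and placeholder monthly sales
months = np.arange(1, 14).reshape(-1, 1)
sales = np.array([120, 135, 128, 140, 150, 145, 160, 155, 170, 165, 175, 180, 190],
                 dtype=float)

# Fit a simple linear trend: sales = a * month + b
model = LinearRegression().fit(months, sales)
fitted = model.predict(months)

# Mean Absolute Percentage Error, the accuracy measure used to compare methods
mape = np.mean(np.abs((sales - fitted) / sales)) * 100
print(f"MAPE: {mape:.2f}%")

# Forecast the next month (month 14)
print("Forecast for month 14:", model.predict(np.array([[14]]))[0])
```
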