
Found 27 Documents

Performance Evaluation of AdamW, RMSProp, and Nadam Optimizers on EfficientNetB2 Model for Image Data Classification
Damayanti, Fanita; Surono, Sugiyarto; Thobirin, Aris
International Journal of Advances in Data and Information Systems, Vol. 7 No. 1 (2026): April 2026
Publisher : Indonesian Scientific Journal

DOI: 10.59395/ijadis.v7i1.1482

Abstract

This study examines the effect of different optimization algorithms on the performance of the EfficientNetB2 model in classifying lung and colon histopathology images. Three commonly used optimizers (AdamW, RMSprop, and Nadam) were analyzed to compare their influence on convergence trends, classification accuracy, and overall learning consistency. Using a five-class dataset covering benign and malignant tissue samples, the experimental results show that all three optimizers deliver reliable predictions, although with varying performance characteristics. RMSprop emerges as the most effective optimizer, achieving the highest accuracy across all evaluation stages, with 99.05% during training, 99.16% on validation, and 98.72% on testing, along with the lowest loss values. This indicates that RMSprop facilitates faster and more stable convergence than the other two methods. AdamW also demonstrates strong predictive performance but shows limitations when distinguishing cancer types with closely similar morphological structures. Nadam attains high accuracy in early stages yet exhibits lower initial stability than RMSprop. Overall, pairing EfficientNetB2 with RMSprop provides the best-performing configuration for this classification task. These results offer valuable insights for designing better training strategies and strengthening the effectiveness of medical-imaging-based computer-aided diagnostic systems.
Deep Convolutional Generative Adversarial Network-Enhanced Data Augmentation for Imbalance Facial Acne Severity Classification Using a Fine-Tuned EfficientNet-B1
Nisya, Khoirun; Surono, Sugiyarto; Thobirin, Aris
Jurnal Teknik Informatika (Jutif), Vol. 7 No. 2 (2026): April 2026
Publisher : Informatika, Universitas Jenderal Soedirman

DOI: 10.52436/1.jutif.2026.7.2.5548

Abstract

Imbalanced datasets often hinder the generalization capability of Convolutional Neural Networks (CNNs) in medical image classification, leading to overfitting and reduced performance on minority classes. This study aims to develop an acne severity classification model using EfficientNet-B1 combined with geometric and photometric augmentation as well as Deep Convolutional Generative Adversarial Network (DCGAN)-based augmentation to address class imbalance. The dataset consists of 1,380 facial images categorized into four acne severity levels: Normal, Level 0, Level 1, and Level 2. Preprocessing includes RGB conversion, bilinear resizing, and center cropping. The data are split into training (80%), validation (10%), and testing (10%) sets. Geometric and photometric augmentation applies horizontal flipping, 45° rotation, color jittering, and random resized cropping, while DCGAN generates synthetic samples to balance minority classes. The EfficientNet-B1 model is fine-tuned using compound scaling, MBConv blocks, Swish activation, Batch Normalization, Cross-Entropy loss, and the AdamW optimizer, with 5-fold cross-validation for robustness. Experimental results demonstrate that DCGAN-based augmentation achieves superior performance, with a test accuracy of 94% and an average F1-score of 0.93, outperforming geometric and photometric augmentation (90% accuracy and 0.88 F1-score). DCGAN augmentation also significantly reduces misclassification between visually similar acne severity levels, particularly Level 0 and Level 1. These findings indicate that integrating DCGAN with EfficientNet-B1 effectively enhances generalization on imbalanced medical image datasets, providing a robust and replicable framework for acne severity classification and related medical imaging applications.
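The balancing step described above amounts to deciding how many synthetic samples each minority class needs. A minimal sketch, where the per-class counts are hypothetical (the abstract reports only the 1,380-image total, not the class breakdown):

```python
def synthetic_counts(class_counts):
    """How many synthetic (e.g. DCGAN-generated) images each class needs
    so that every class matches the largest class."""
    target = max(class_counts.values())
    return {c: target - n for c, n in class_counts.items()}

# Hypothetical per-class counts for the four acne severity levels.
counts = {"Normal": 600, "Level 0": 400, "Level 1": 250, "Level 2": 130}
print(synthetic_counts(counts))
# {'Normal': 0, 'Level 0': 200, 'Level 1': 350, 'Level 2': 470}
```

The DCGAN is then asked to generate exactly these many images per class, so the augmented training set is uniform across severity levels.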
Application of EfficientNetV2-S Architecture with Focal Loss to Overcome Class Imbalances in Skin Cancer Classification
Wati, Marfungah; Thobirin, Aris; Surono, Sugiyarto
International Journal of Advances in Data and Information Systems, Vol. 7 No. 1 (2026): April 2026
Publisher : Indonesian Scientific Journal

DOI: 10.59395/ijadis.v7i1.1524

Abstract

Imbalanced class distributions in skin lesion image datasets can reduce the effectiveness of multiclass classification models. This research proposes a classification model based on the EfficientNetV2-S architecture with two-stage training and a loss function that emphasizes learning on classes with limited data. The models were trained using on-the-fly image augmentation and evaluated to assess generalization to the test data. In the initial stage, the model was trained with the backbone frozen, updating only the classifier layer. Next, fine-tuning was carried out on part of the backbone layers to adapt the feature representations to the characteristics of skin lesion images. Evaluation was conducted through multiple training runs with different random initializations to ensure consistency of results. The test results showed that performance improved after fine-tuning, with an accuracy of about 88% and an increase in F1-score for some classes. Overall, the results indicate that the proposed approach may help improve classification performance when dealing with imbalanced skin cancer image data.
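The focal loss named in the title down-weights easy examples so that classes with limited data contribute more to the gradient. A minimal NumPy sketch (the probabilities and labels below are made up for illustration; setting gamma = 0 recovers plain cross-entropy):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0):
    """Focal loss for one-hot labels y and predicted class probabilities p:
    mean of (1 - p_t)^gamma * -log(p_t), where p_t is the true-class probability.
    gamma = 0 reduces to ordinary cross-entropy."""
    pt = np.sum(p * y, axis=1)               # probability assigned to the true class
    return np.mean((1 - pt) ** gamma * -np.log(pt))

# Two toy predictions: one confident and correct, one less confident.
p = np.array([[0.9, 0.1], [0.3, 0.7]])
y = np.array([[1, 0], [0, 1]])
print(focal_loss(p, y, gamma=0.0))   # plain cross-entropy
print(focal_loss(p, y, gamma=2.0))   # easy examples contribute far less
```

With gamma > 0 the confident example's contribution is scaled by (1 - 0.9)^2 = 0.01, so the harder, less confident example dominates the average, which is the mechanism the paper relies on for minority classes.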
Implementation of DenseNet121 Based on Convolutional Neural Network with Geometric Augmentation for Breast Cancer Histopathology Image Classification
Ariani, Nabilah Evi; Surono, Sugiyarto; Thobirin, Aris
CAUCHY: Jurnal Matematika Murni dan Aplikasi, Vol. 11 No. 1 (2026)
Publisher : Mathematics Department, Universitas Islam Negeri Maulana Malik Ibrahim Malang

DOI: 10.18860/cauchy.v11i1.37896

Abstract

This study evaluates the performance of the DenseNet121 architecture for binary classification of breast cancer histopathological images using the BreakHis dataset. The model employs ImageNet pre-trained weights, fine-tuning, and geometric data augmentation to improve feature learning and generalization. To obtain more reliable results, three optimization algorithms (Adam, AdamW, and RMSprop) were evaluated through repeated experiments, and performance was reported using mean and standard deviation of test metrics. The experimental results demonstrate that DenseNet121 achieves consistently high classification performance across different optimizers, with the Adam optimizer showing the most stable results. These findings indicate that DenseNet121 combined with data augmentation provides an effective and robust approach for histopathological image classification while emphasizing the importance of repeated evaluation for reliable performance assessment.
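The repeated-evaluation protocol (mean and standard deviation over several runs per optimizer) can be sketched with the standard library; the accuracy values below are hypothetical placeholders, not the paper's reported numbers:

```python
from statistics import mean, stdev

# Hypothetical test accuracies from three repeated runs per optimizer.
runs = {
    "Adam":    [0.978, 0.981, 0.979],
    "AdamW":   [0.975, 0.983, 0.971],
    "RMSprop": [0.972, 0.980, 0.976],
}

# Report each optimizer as mean ± standard deviation, as the study does.
for opt, accs in runs.items():
    print(f"{opt}: {mean(accs):.3f} ± {stdev(accs):.3f}")

# The "most stable" optimizer is the one with the smallest spread.
most_stable = min(runs, key=lambda o: stdev(runs[o]))
print("most stable:", most_stable)
```

Reporting spread alongside the mean is what lets the authors distinguish a genuinely stable optimizer from one that merely had a lucky run.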
ResNet-50 and ResNeXt-50 for Multiclass Classification of Chronic Wound Images under Gaussian Blur
Andhika, Reynaldi Ikbar Surya; Surono, Sugiyarto; Thobirin, Aris
CAUCHY: Jurnal Matematika Murni dan Aplikasi, Vol. 11 No. 1 (2026)
Publisher : Mathematics Department, Universitas Islam Negeri Maulana Malik Ibrahim Malang

DOI: 10.18860/cauchy.v11i1.40323

Abstract

Chronic wound image classification is important for supporting the assessment of conditions such as diabetic foot ulcers (DFU) and pressure ulcers (PU). While convolutional neural network (CNN)-based approaches have shown promising results, most previous studies focus on binary classification and rarely evaluate robustness in multiclass chronic wound scenarios. This study investigates multiclass classification of chronic wound images, distinguishing DFU, PU, and Normal Skin, using ResNet-50 and ResNeXt-50 architectures. A total of 2,146 publicly available images were stratified at the image level into training (70%), validation (15%), and test (15%) sets. Both models were trained under an identical configuration using data augmentation and class-weighted loss. On clean test images, ResNet-50 and ResNeXt-50 achieved strong and comparable performance, with accuracies of 0.9877 and 0.9938 and macro-averaged F1-scores of 0.9866 and 0.9928, respectively. Robustness was evaluated by applying Gaussian blur at the inference stage to simulate image defocus. Under stronger blur (σ = 2.0), ResNeXt-50 maintained higher performance (accuracy 0.9723, macro-F1 0.9679) than ResNet-50 (accuracy 0.9200, macro-F1 0.9123). These results highlight the contribution of this study in evaluating robustness to blur in multiclass chronic wound image classification, while emphasizing that robustness here refers specifically to resistance against image blur or defocus.
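The inference-time corruption used above is ordinary Gaussian blur. A minimal one-dimensional NumPy sketch (a full implementation would convolve rows then columns of the image; the seven-pixel "edge" here is illustrative):

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """Normalized 1-D Gaussian kernel; convolving an image with it along
    rows and then columns simulates the defocus used in the evaluation."""
    if radius is None:
        radius = int(3 * sigma)          # common truncation at 3 sigma
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()                   # weights sum to 1

k = gaussian_kernel(sigma=2.0)           # the paper's stronger-blur setting
row = np.array([0., 0., 0., 1., 0., 0., 0.])   # a single sharp pixel
blurred = np.convolve(row, k, mode="same")
print(blurred.max())   # the peak is spread out, i.e. edges are softened
```

Applying this only at test time, as the study does, probes whether features learned on sharp images survive defocus without retraining.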
Hybrid Otsu Morphological Pre-processing for EfficientNetB4 Based Acute Lymphoblastic Leukemia Classification
Audina, Maretta Mia; Surono, Sugiyarto; Thobirin, Aris; Wen, Goh Khang
CAUCHY: Jurnal Matematika Murni dan Aplikasi, Vol. 11 No. 1 (2026)
Publisher : Mathematics Department, Universitas Islam Negeri Maulana Malik Ibrahim Malang

DOI: 10.18860/cauchy.v11i1.40730

Abstract

Image quality plays a crucial role in improving the performance of image-based classification models, particularly when raw images exhibit noise, uneven illumination, and unclear object boundaries. This study proposes a hybrid segmentation approach to enhance object separation by reducing background interference and refining object contours. The method combines Otsu thresholding for initial object–background separation with elliptical morphological operations to improve region consistency and boundary definition. The segmented grayscale images are replicated into three channels and resized to 224×224 pixels before being used as input to an EfficientNetB4-based classification model optimized with the AdamW optimizer and fine-tuning. Experimental results under identical data splits, training settings, and fine-tuning protocols show that the proposed segmentation-based method achieves a final test accuracy of 97%, outperforming the baseline model trained on raw images (95% test accuracy) using the same EfficientNetB4-AdamW configuration. These results demonstrate that incorporating segmentation in the preprocessing stage effectively enhances discriminative feature learning and improves overall classification performance.
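The Otsu step that starts the pipeline picks the grayscale threshold maximizing between-class variance. A self-contained NumPy sketch on synthetic bimodal data (the morphological refinement stage is omitted; the intensity values are made up):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the resulting foreground/background split."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    sum_all = float(np.dot(np.arange(256), hist))
    w0, sum0 = 0, 0.0
    best_t, best_var = 0, 0.0
    for t in range(256):
        w0 += hist[t]                    # background pixel count so far
        sum0 += t * hist[t]              # background intensity sum so far
        if w0 == 0 or w0 == total:
            continue
        m0 = sum0 / w0                   # background mean
        m1 = (sum_all - sum0) / (total - w0)   # foreground mean
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Synthetic bimodal "image": dark background (~30) and a bright object (~200).
rng = np.random.default_rng(0)
img = np.clip(np.concatenate([rng.normal(30, 5, 500),
                              rng.normal(200, 5, 500)]), 0, 255)
t = otsu_threshold(img)
print(t)  # falls between the two intensity modes
```

In the paper's pipeline this threshold yields the initial binary mask, which elliptical morphological operations then clean up before the image is fed to EfficientNetB4.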
Analysis of Color Space Transformations on MobileNetV2 Performance for Image Classification
Vironica, Sherlyn; Surono, Sugiyarto; Thobirin, Aris
CAUCHY: Jurnal Matematika Murni dan Aplikasi, Vol. 11 No. 1 (2026)
Publisher : Mathematics Department, Universitas Islam Negeri Maulana Malik Ibrahim Malang

DOI: 10.18860/cauchy.v11i1.41353

Abstract

This study analyzes the effect of color space transformation on the performance of MobileNetV2 for rice leaf disease classification using RGB, HSV, CIELab, and their combinations. The RGB color space is used as the baseline representation, while HSV and CIELab are applied to provide alternative representations of color information. In addition, a dual-stream architecture is employed to combine different color spaces for feature extraction. The results show that the choice of color space influences classification performance. In the single color-space scenario, RGB achieves the highest accuracy of 91.42%, while in the combined scenario, the RGB+CIELab model achieves the best performance with an accuracy of 97.00%. These findings suggest that the use of multiple color spaces can provide richer feature representations and may improve classification performance. Furthermore, the results indicate that optimizing the input representation plays an important role in improving model performance, particularly for lightweight architectures such as MobileNetV2. Overall, color space transformation improved classification performance on the rice leaf disease dataset considered in this study.
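The idea of feeding a dual-stream model two representations of the same pixels can be sketched with the standard library's colorsys module (note this illustrates RGB plus HSV; the paper's best pairing used CIELab, which colorsys does not provide, and the pixel value here is a hypothetical leaf-green):

```python
import colorsys

def rgb_and_hsv(pixel):
    """Return one pixel in two color spaces, as a dual-stream model would
    receive it: the RGB values and their HSV transform (all in [0, 1])."""
    r, g, b = (c / 255 for c in pixel)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return (r, g, b), (h, s, v)

# A hypothetical green rice-leaf pixel.
rgb, hsv = rgb_and_hsv((60, 180, 75))
print("RGB stream:", rgb)
print("HSV stream:", hsv)  # hue near 1/3 of the circle, i.e. green
```

Each stream extracts features from its own representation and the features are fused downstream, which is how the combined color-space models in the study gain richer inputs than a single-space baseline.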