Kurniawan, Muhammad Bayu
Unknown Affiliation

Published: 2 Documents

Articles

COMPARATIVE ANALYSIS OF CONTRAST ENHANCEMENT METHODS FOR CLASSIFICATION OF PEKALONGAN BATIK MOTIFS USING CONVOLUTIONAL NEURAL NETWORK
Kurniawan, Muhammad Bayu; Utami, Ema
Jurnal Teknik Informatika (Jutif) Vol. 5 No. 6, December 2024
Publisher: Informatika, Universitas Jenderal Soedirman

DOI: 10.52436/1.jutif.2024.5.6.2621

Abstract

Batik artists in Pekalongan have freedom in determining motifs, creating a diversity of distinctive batik motifs. However, this diversity often makes it difficult for people to recognize the different motifs, as visual identification requires in-depth knowledge. The lack of understanding about Pekalongan batik is a challenge in recognizing these motifs. To overcome this challenge, an efficient and accurate method of motif identification is needed. This study aims to analyze the efficacy of contrast enhancement methods in improving the classification of Pekalongan batik motifs using a convolutional neural network (CNN) with the ResNet50 architecture. A dataset of 480 images was collected directly from Museum Batik Pekalongan and split into three subsets: 70% for training, 15% for validation, and 15% for testing. Two contrast enhancement methods, contrast limited adaptive histogram equalization (CLAHE) and histogram equalization (HE), were applied to create additional datasets. The Adam optimizer was used to train the model over 50 epochs at a learning rate of 0.001. The test results show that the CLAHE-enhanced dataset achieves the best accuracy at 83%, followed by the HE-enhanced dataset at 81% and the original (unenhanced) dataset at 76%. This finding shows that applying contrast enhancement methods, especially CLAHE, can increase the model's accuracy in classifying batik motifs.
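The abstract describes the pipeline only at a high level; the sketch below illustrates how CLAHE/HE preprocessing and ResNet50 training with the stated hyperparameters (Adam, learning rate 0.001, 50 epochs) could be wired together in Python. The image size, LAB-channel handling, CLAHE parameters, and the classification head are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of the preprocessing + training setup described in the abstract.
# Assumptions (not from the paper): 224x224 input, CLAHE/HE applied to the L channel
# in LAB space, clipLimit 2.0 with 8x8 tiles, and a single softmax classification head.
import cv2
import tensorflow as tf
from tensorflow.keras.applications import ResNet50

def enhance_contrast(img_bgr, method="clahe"):
    """Apply CLAHE or HE to the lightness channel of a BGR image."""
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    if method == "clahe":
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        l = clahe.apply(l)
    elif method == "he":
        l = cv2.equalizeHist(l)
    return cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)

def build_model(num_classes):
    """ResNet50 backbone with a softmax head, compiled with Adam at lr 0.001."""
    base = ResNet50(weights="imagenet", include_top=False, pooling="avg",
                    input_shape=(224, 224, 3))
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(base.output)
    model = tf.keras.Model(base.input, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

# Training as reported in the abstract (50 epochs); train_ds and val_ds would be
# built from the 70/15/15 split of the contrast-enhanced images.
# model = build_model(num_classes=...)
# model.fit(train_ds, validation_data=val_ds, epochs=50)
```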
Performance Comparison of ResNet50, VGG16, and MobileNetV2 for Brain Tumor Classification on MRI Images
Kurniawan, Muhammad Bayu; Utami, Ema
Sistemasi: Jurnal Sistem Informasi Vol. 14, No. 2 (2025)
Publisher: Program Studi Sistem Informasi, Fakultas Teknik dan Ilmu Komputer

DOI: 10.32520/stmsi.v14i2.5054

Abstract

Brain tumor classification using MRI images is a significant challenge in medical diagnosis, requiring models with high accuracy and efficient training. This study compares the performance of three Convolutional Neural Network (CNN) models, ResNet50, VGG16, and MobileNetV2, for brain tumor classification based on MRI images. The dataset consists of four categories: glioma, meningioma, pituitary, and no tumor, with data split into training, validation, and testing sets. Each model was evaluated using accuracy, precision, recall, F1-score, specificity, and training time to assess its effectiveness in predicting brain tumors with optimal accuracy and efficiency. Experimental results indicate that VGG16 achieved the best overall performance, with an accuracy of 94.93%, precision of 94.68%, and specificity of 98.33%, while also having the shortest training time of 47.15 minutes. MobileNetV2 demonstrated strong performance with a recall of 94.08% but required a longer training time of 79.53 minutes. ResNet50 recorded the lowest accuracy (91.67%); precision (91.79%) was its relative strength, but it underperformed in recall (91.25%) and specificity (97.2%). Overall, this study confirms that VGG16 is the most efficient and effective model for MRI-based brain tumor classification.
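Of the metrics listed in the abstract, specificity is the only one not provided directly by common libraries; the sketch below shows one way the per-model evaluation could be computed from test-set predictions. Macro-averaging over the four classes is an assumption, as the abstract does not state which averaging was used, and this is not the authors' code.

```python
# Minimal evaluation sketch for the four-class problem
# (glioma, meningioma, pituitary, no tumor). Specificity per class is
# TN / (TN + FP), derived from the confusion matrix, then macro-averaged.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

def evaluate(y_true, y_pred, num_classes=4):
    cm = confusion_matrix(y_true, y_pred, labels=list(range(num_classes)))
    specificities = []
    for c in range(num_classes):
        tp = cm[c, c]
        fp = cm[:, c].sum() - tp
        fn = cm[c, :].sum() - tp
        tn = cm.sum() - tp - fp - fn
        specificities.append(tn / (tn + fp))
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, average="macro"),
        "recall": recall_score(y_true, y_pred, average="macro"),
        "f1": f1_score(y_true, y_pred, average="macro"),
        "specificity": float(np.mean(specificities)),
    }

# Dummy labels for illustration; in the study the same evaluation would be run
# on the test-set predictions of ResNet50, VGG16, and MobileNetV2.
print(evaluate([0, 1, 2, 3, 0, 1], [0, 1, 2, 3, 1, 1]))
```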