p-Index (2021-2026): 0.408
This author has published in the following journal: bit-Tech
Ani Dijah Rahajoe
Universitas Pembangunan Nasional "Veteran" Jawa Timur

Published: 2 documents

Articles

Comparison of Fine-Tuning InceptionV3 and Xception for Eye Disease Classification Based on Fundus Images
Irsyad Rafi Naufaldi; Ani Dijah Rahajoe; Eva Yulia Puspaningrum
bit-Tech Vol. 8 No. 2 (2025): bit-Tech
Publisher : Komunitas Dosen Indonesia

DOI: 10.32877/bt.v8i2.3195

Abstract

Eye diseases represent a major global health concern that can lead to visual impairment and even blindness if not detected early. The shortage of ophthalmologists and the unequal distribution of medical services make automatic eye disease detection systems increasingly essential, so Artificial Intelligence (AI), particularly Deep Learning, plays a vital role. This study compares the performance of two CNN architectures, InceptionV3 and Xception. Unlike previous studies, this paper provides a comparative fine-tuning analysis of the two models on multiclass eye disease classification. The approach is transfer learning with fine-tuning of several final layers, optimizing models pretrained on large-scale datasets such as ImageNet to achieve higher accuracy. The dataset consists of 4,184 fundus images covering multiple eye diseases with a balanced class distribution, ensuring diversity that supports model generalization; it was divided into train, validation, and test sets with a 70:15:15 ratio. Training used the Adam optimizer, a batch size of 16, a learning rate of 0.0001, and early stopping to prevent overfitting. Model performance was assessed with accuracy, precision, recall, and F1-score. Experimental results indicate that Xception achieved superior performance, with an accuracy of 87.78%, precision of 0.89, recall of 0.88, and F1-score of 0.88, outperforming InceptionV3 (accuracy of 85.56%), indicating that the model is reliable for preliminary diagnosis. These findings suggest that the Xception architecture is more efficient at extracting features from limited yet complex medical datasets.
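The training setup the abstract describes (a 70:15:15 split, freezing all but the last few layers for fine-tuning, and early stopping on validation loss) can be sketched in framework-agnostic Python. This is a minimal illustration of those three mechanisms, not the authors' code; all function and class names here are hypothetical.

```python
def split_dataset(items, ratios=(0.70, 0.15, 0.15)):
    """Split a list of samples into train/valid/test by the given ratios."""
    n = len(items)
    n_train = int(n * ratios[0])
    n_valid = int(n * ratios[1])
    return (items[:n_train],
            items[n_train:n_train + n_valid],
            items[n_train + n_valid:])

def unfreeze_last(num_layers, k):
    """Fine-tuning mask: freeze every layer except the last k.

    Returns a list of booleans, one per layer, where True means trainable.
    """
    return [i >= num_layers - k for i in range(num_layers)]

class EarlyStopping:
    """Stop training when validation loss has not improved for `patience` epochs."""
    def __init__(self, patience=3):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience  # True -> stop training
```

With 4,184 images, `split_dataset` yields roughly 2,928 training, 627 validation, and 629 test samples, matching the 70:15:15 ratio in the abstract.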
Waste Classification Using YOLOv8 and One Factor At a Time
Muhammad Aldi Maulana; Eva Yulia Puspaningrum; Ani Dijah Rahajoe
bit-Tech Vol. 8 No. 2 (2025): bit-Tech
Publisher : Komunitas Dosen Indonesia

DOI: 10.32877/bt.v8i2.3209

Abstract

Solid waste management has become a significant global environmental challenge that affects both ecosystem sustainability and human well-being. The increasing volume of waste generated by daily human activities highlights the urgent need for technology-based solutions that support efficient waste sorting, recycling, and resource recovery. This study proposes an automatic waste classification system using the YOLOv8 algorithm, a state-of-the-art deep learning model capable of real-time object detection with high accuracy. A dataset of 1,800 labeled waste images representing five main categories (plastic, glass, metal, paper, and organic) was used for model training and evaluation. To enhance performance, the One Factor At a Time (OFAT) approach was applied for hyperparameter optimization, focusing on learning rate, batch size, and number of epochs. Two models were compared: the default YOLOv8 configuration and the OFAT-optimized YOLOv8 model. Experimental results show that the optimized model achieved a mAP@0.5:0.95 of 86.1%, slightly higher than the default YOLOv8's 85.8%. Although the 0.3% improvement appears modest, it indicates better model consistency and reliability across varied data conditions. Integrating the OFAT technique into YOLOv8 is a novel contribution, demonstrating that systematic hyperparameter tuning can enhance the efficiency and robustness of automated waste detection systems, thereby supporting environmental sustainability and the realization of a green economy.
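The OFAT procedure the abstract describes (vary one hyperparameter at a time, hold the others at their current best values, keep the winner, then move to the next factor) can be sketched as a short generic loop. This is a hypothetical illustration of the OFAT strategy over the three factors named in the abstract, not the authors' implementation; the toy `evaluate` function stands in for a real YOLOv8 training-and-validation run.

```python
def ofat_search(evaluate, baseline, grid):
    """One-Factor-At-a-Time hyperparameter search.

    For each factor in `grid`, try every candidate value while holding all
    other factors at their current best settings, then lock in the candidate
    with the highest score before tuning the next factor.
    """
    best = dict(baseline)
    for name, candidates in grid.items():
        scores = {}
        for value in candidates:
            trial = dict(best)          # copy current best settings
            trial[name] = value         # vary only this one factor
            scores[value] = evaluate(trial)
        best[name] = max(scores, key=scores.get)
    return best

# Toy stand-in for a training run: peaks at lr=0.001, batch=16, epochs=100.
def toy_map_score(cfg):
    return (-abs(cfg["lr"] - 0.001) * 100
            - abs(cfg["batch"] - 16) * 0.01
            - abs(cfg["epochs"] - 100) * 0.001)

grid = {"lr": [0.01, 0.001, 0.0001],
        "batch": [8, 16, 32],
        "epochs": [50, 100, 150]}
baseline = {"lr": 0.01, "batch": 8, "epochs": 50}
best = ofat_search(toy_map_score, baseline, grid)
```

OFAT is cheap (the number of runs grows linearly with the number of candidate values rather than combinatorially, as in grid search), but it can miss interactions between factors, which is consistent with the modest 0.3% gain reported above.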