
Found 3 Documents
Journal : Journal of Computing Theories and Applications

Butterflies Recognition using Enhanced Transfer Learning and Data Augmentation
Authors : Adityawan, Harish Trio; Farroq, Omar; Santosa, Stefanus; Islam, Hussain Md Mehedul; Sarker, Md Kamruzzaman; Setiadi, De Rosal Ignatius Moses
Journal of Computing Theories and Applications Vol. 1 No. 2 (2023): JCTA 1(2) 2023
Publisher : Universitas Dian Nuswantoro

DOI: 10.33633/jcta.v1i2.9443

Abstract

Butterfly recognition serves a crucial role as an environmental indicator and a key factor in plant pollination. Automating this recognition with Convolutional Neural Networks (CNNs) can expedite the task. Several pre-trained CNN models, such as VGG, ResNet, and Inception, have been widely used for this purpose; however, previous research has been constrained to at most 15 classes. This study modifies the InceptionV3 CNN model and combines it with three data augmentation techniques to recognize up to 100 butterfly species. To curb overfitting, the study employs a series of data augmentation techniques; in parallel, the InceptionV3 model is refined by reducing the number of layers and integrating four new layers. Test results show that the proposed model achieves an impressive accuracy of 99.43% for 15 classes with only 10 epochs, exceeding prior models by approximately 5%. When extended to 100 classes, the model maintains a high accuracy of 98.49% with 50 epochs. The proposed model surpasses standard pre-trained models, including VGG16, ResNet50, and InceptionV3, illustrating its potential for broader application.
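The abstract does not spell out which three augmentations were used, so as a hedged illustration only, here is a minimal sketch of three common image augmentations (horizontal flip, 90° rotation, brightness jitter) applied to a grayscale image stored as a list of pixel rows; all function names are illustrative, not from the paper:

```python
import random

def hflip(img):
    """Horizontal flip: reverse each row of pixels."""
    return [list(reversed(row)) for row in img]

def rotate90(img):
    """Rotate 90 degrees clockwise: reverse the rows, then transpose."""
    return [list(row) for row in zip(*img[::-1])]

def jitter_brightness(img, delta):
    """Shift every pixel by delta, clamped to the 0-255 range."""
    return [[max(0, min(255, p + delta)) for p in row] for row in img]

def augment(img, rng=random):
    """Apply a random combination of the three augmentations."""
    if rng.random() < 0.5:
        img = hflip(img)
    if rng.random() < 0.5:
        img = rotate90(img)
    return jitter_brightness(img, rng.randint(-30, 30))
```

In a training pipeline, `augment` would be called on each image every epoch, so the model sees a slightly different variant each time, which is the mechanism by which augmentation curbs overfitting.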
Exploring DQN-Based Reinforcement Learning in Autonomous Highway Navigation Performance Under High-Traffic Conditions
Authors : Nugroho, Sandy; Setiadi, De Rosal Ignatius Moses; Islam, Hussain Md Mehedul
Journal of Computing Theories and Applications Vol. 1 No. 3 (2024): JCTA 1(3) 2024
Publisher : Universitas Dian Nuswantoro

DOI: 10.62411/jcta.9929

Abstract

Driving in a straight line is one of the fundamental tasks for autonomous vehicles, but it can become complex and challenging on high-speed highways in dense traffic. This research explores the Deep Q-Network (DQN) model, a reinforcement learning (RL) method, in a highway environment. DQN was chosen for its ability to handle complex, high-dimensional environments through neural-network function approximation. DQN simulations were conducted across four scenarios, with the agent operating at speeds ranging from 60 to nearly 100 km/h. The simulations featured a variable number of vehicles/obstacles, ranging from 20 to 80, and each simulation lasted 40 seconds within the Highway-Env simulator. Based on the test results, the DQN method exhibited excellent performance, achieving the highest reward value in the first scenario, 35.6117 out of a maximum of 40, and a success rate of 90.075%.
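At the core of DQN is a temporal-difference update that moves Q(s, a) toward the bootstrapped target r + γ·max_a′ Q(s′, a′), with actions chosen ε-greedily; the Q-function itself is a neural network. As a hedged sketch of just that update rule, here is its tabular form (the network, replay buffer, and target network of full DQN are omitted, and all names and constants are illustrative):

```python
import random
from collections import defaultdict

GAMMA, ALPHA, EPSILON = 0.9, 0.5, 0.1  # discount, learning rate, exploration

def select_action(q, state, n_actions, rng=random):
    """Epsilon-greedy: explore with probability EPSILON, else exploit."""
    if rng.random() < EPSILON:
        return rng.randrange(n_actions)
    return max(range(n_actions), key=lambda a: q[(state, a)])

def td_update(q, state, action, reward, next_state, n_actions, done):
    """Move Q(s, a) toward the target r + gamma * max_a' Q(s', a')."""
    target = reward
    if not done:
        target += GAMMA * max(q[(next_state, a)] for a in range(n_actions))
    q[(state, action)] += ALPHA * (target - q[(state, action)])

q = defaultdict(float)
# One terminal transition: in state 0, action 1 earned reward 1.0.
td_update(q, 0, 1, 1.0, None, 2, done=True)  # q[(0, 1)] moves toward 1.0
```

DQN replaces the `q` table with a network so that visually similar highway states share value estimates, which is what lets it scale to the simulator's high-dimensional observations.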
Enhanced Vision Transformer and Transfer Learning Approach to Improve Rice Disease Recognition
Authors : Rachman, Rahadian Kristiyanto; Setiadi, De Rosal Ignatius Moses; Susanto, Ajib; Nugroho, Kristiawan; Islam, Hussain Md Mehedul
Journal of Computing Theories and Applications Vol. 1 No. 4 (2024): JCTA 1(4) 2024
Publisher : Universitas Dian Nuswantoro

DOI: 10.62411/jcta.10459

Abstract

In the evolving landscape of agricultural technology, recognizing rice diseases through computational models is a critical challenge, predominantly addressed with Convolutional Neural Networks (CNNs). However, the localized feature extraction of CNNs often falls short in complex scenarios, motivating a shift toward models capable of global contextual understanding. The Vision Transformer (ViT) is a deep learning model that uses a self-attention mechanism to overcome this limitation by capturing image features in a comprehensive global context. This research refines and adapts the ViT Base (B) transfer learning model for the nuanced task of rice disease recognition. Through reconfiguration, layer augmentation, and hyperparameter tuning, the study evaluates the model on both balanced and imbalanced datasets, where it outperforms traditional CNN models, including VGG, MobileNet, and EfficientNet. The proposed ViT model achieved superior recall (0.9792), precision (0.9815), specificity (0.9938), f1-score (0.9791), and accuracy (0.9792) on challenging datasets, establishing a new benchmark in rice disease recognition and underscoring its potential as a transformative tool in the agricultural domain.
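The "global context" the abstract attributes to ViT comes from self-attention: every patch embedding attends to every other via softmax(QKᵀ/√d)·V. As a hedged illustration of that mechanism (not the paper's implementation), here is a minimal single-head version in plain Python; the projection matrices would normally be learned, and all names are illustrative:

```python
import math

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def softmax(row):
    """Numerically stable softmax over one row of attention scores."""
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product attention over patch embeddings x."""
    q, k, v = matmul(x, wq), matmul(x, wk), matmul(x, wv)
    d = len(q[0])
    scores = [[sum(qi * ki for qi, ki in zip(qr, kr)) / math.sqrt(d)
               for kr in k] for qr in q]
    weights = [softmax(row) for row in scores]  # each row sums to 1
    return matmul(weights, v)  # every output mixes information from all patches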