Putra, Permana Langgeng Wicaksono Ellwid
Unknown Affiliation

Published : 4 Documents
Articles


Comparing Haar Cascade and YOLOFACE for Region of Interest Classification in Drowsiness Detection Andrean, Muhammad Niko; Shidik, Guruh Fajar; Naufal, Muhammad; Zami, Farrikh Al; Winarno, Sri; Azies, Harun Al; Putra, Permana Langgeng Wicaksono Ellwid
JURNAL MEDIA INFORMATIKA BUDIDARMA Vol 8, No 1 (2024): Januari 2024
Publisher : Universitas Budi Darma

DOI: 10.30865/mib.v8i1.7167

Abstract

Driver drowsiness poses a serious threat to road safety, potentially leading to fatal accidents. Current research often relies on facial features, specific eye components, and the mouth for drowsiness classification, which can bias the classification results. Therefore, this study focuses on both eyes to mitigate potential biases in drowsiness classification. This research compares the accuracy of driver drowsiness detection using two image segmentation methods, Haar Cascade and YOLO-face, followed by classification with a decision tree algorithm. The dataset consists of 22,348 images of drowsy driver faces and 19,445 images of non-drowsy driver faces. Segmentation with YOLO-face proves capable of producing a higher-quality Region of Interest (ROI) and eye-image training data than segmentation with the Haar Cascade method. After grid search and 10-fold cross-validation, the decision tree model achieved its highest accuracy with the entropy criterion: 98.54% on YOLO-face segmentation results and 98.03% on Haar Cascade segmentation results. Despite the slightly higher accuracy of the model trained on YOLO-face data, the YOLO-face method requires significantly more data-processing time than the Haar Cascade method. Overall, the results indicate that applying the ROI concept to input images can improve the system's focus and accuracy in recognizing signs of drowsiness in drivers.
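As an illustrative sketch (not the paper's code), the entropy criterion that the abstract reports as the best decision tree parameter measures how mixed the class labels are at a node, and a split is chosen to maximize the resulting information gain:

```python
from collections import Counter
import math

def entropy(labels):
    """Shannon entropy of a label list; 0 for a pure node, 1 for a 50/50 binary node."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(parent, left, right):
    """Entropy reduction achieved by splitting `parent` into `left` and `right`."""
    n = len(parent)
    weighted = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(parent) - weighted

# A perfect split of drowsy vs. non-drowsy samples recovers the full parent entropy.
parent = ["drowsy"] * 4 + ["alert"] * 4
gain = information_gain(parent, ["drowsy"] * 4, ["alert"] * 4)
```

The toy labels above are hypothetical; in the study this criterion is applied to eye-image features extracted from the segmented ROI.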
Pengenalan Ekspresi Wajah Menggunakan Transfer Learning MobileNetV2 dan EfficientNet-B0 dalam Memprediksi Perkelahian Handayani, Ni Made Kirei Kharisma; Hidayat, Erwin Yudi; Naufal, Muhammad; Putra, Permana Langgeng Wicaksono Ellwid
JURNAL MEDIA INFORMATIKA BUDIDARMA Vol 8, No 1 (2024): Januari 2024
Publisher : Universitas Budi Darma

DOI: 10.30865/mib.v8i1.7048

Abstract

Facial expressions play an important role in recognizing a person's emotions. Recognizing emotions can help in understanding someone's condition and can signal their likely actions. Fighting is a form of violence driven by negative emotions that should be prevented and handled immediately. In this study, expression recognition is used to predict the likelihood of a fight based on the expression a person shows. The dataset used is FER-2013, modified into two labels, "Yes" and "No". The data undergoes preprocessing consisting of resizing and normalization. The model experiments use transfer learning with the MobileNetV2 and EfficientNet-B0 architectures, modified through hyperparameter tuning and fine-tuning that includes freezing the first 25% of each model's layers and adding several layers such as flatten and dense layers. Training uses 30 epochs, a batch size of 32, and the Adam optimizer with a learning rate of 0.0001. Model performance is evaluated with a confusion matrix; comparing the results, the model with the best accuracy is EfficientNet-B0 at 82%. Meanwhile, in terms of training time and model size, MobileNetV2 trains 1 hour 1 minute 43 seconds faster and is 21.57 MB smaller than EfficientNet-B0.
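The resizing and normalization preprocessing steps named above can be sketched in miniature (a stdlib-only toy, assuming 8-bit grayscale input as in FER-2013, not the paper's actual pipeline):

```python
def normalize(pixels):
    """Scale 8-bit pixel values into [0, 1], the normalization step in the preprocessing."""
    return [p / 255.0 for p in pixels]

def nearest_resize(row, new_width):
    """Toy 1-D nearest-neighbour resize, standing in for resizing an image row."""
    old = len(row)
    return [row[i * old // new_width] for i in range(new_width)]

scaled = normalize([0, 128, 255])          # [0.0, ~0.502, 1.0]
resized = nearest_resize([10, 20, 30, 40], 2)
```

In practice a 2-D image would be resized along both axes to the backbone's expected input size before normalization; the function names here are illustrative only.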
OPTIMIZING BUTTERFLY CLASSIFICATION THROUGH TRANSFER LEARNING: FINE-TUNING APPROACH WITH NASNETMOBILE AND MOBILENETV2 Putri, Ni Kadek Devi Adnyaswari; Luthfiarta, Ardytha; Putra, Permana Langgeng Wicaksono Ellwid
Jurnal Teknik Informatika (Jutif) Vol. 5 No. 3 (2024): JUTIF Volume 5, Number 3, June 2024
Publisher : Informatika, Universitas Jenderal Soedirman

DOI: 10.52436/1.jutif.2024.5.3.1583

Abstract

Butterflies play a significant role in ecosystems, especially as indicators of biological balance. Butterfly species are distinctly different from one another, although some differ only in very subtle traits. Entomologists recognize butterfly species through manual taxonomy and image analysis, which is time-consuming and costly. Previous research has applied computer vision, but with a limited data distribution, so the resulting systems recognize only a narrow range of butterfly species. Therefore, this research applies computer vision with transfer learning, which improves pattern recognition on image data without the need to train from scratch. A key technique in transfer learning is fine-tuning: adjusting parameter values to suit the architecture and freezing certain layers of it. Applying this fine-tuning process yields a significant increase in accuracy, visible when comparing results before and after fine-tuning. This research therefore focuses on two Convolutional Neural Network architectures, MobileNetV2 and NASNetMobile. Both architectures achieve satisfactory accuracy in classifying 75 butterfly species using the transfer learning method: with fine-tuning, MobileNetV2 reaches 86% accuracy, while NASNetMobile is slightly lower at 85%.
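The layer-freezing idea central to the fine-tuning described above can be expressed as a small helper (a hypothetical sketch; the fraction and layer count are illustrative, not taken from the paper):

```python
def frozen_layer_flags(num_layers, freeze_fraction=0.25):
    """Return one trainable-flag per layer, freezing the first `freeze_fraction`
    of the backbone so its pretrained features are kept intact during fine-tuning.
    False = frozen, True = trainable."""
    cutoff = int(num_layers * freeze_fraction)
    return [i >= cutoff for i in range(num_layers)]

# e.g. a 20-layer backbone: the first 5 layers stay frozen, the rest are fine-tuned.
flags = frozen_layer_flags(20)
```

In a deep-learning framework the same effect is typically achieved by setting a per-layer trainable attribute; this pure-Python version only demonstrates the index arithmetic.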
A Comparative Study of MobileNet Architecture Optimizer for Crowd Prediction Putra, Permana Langgeng Wicaksono Ellwid; Naufal, Muhammad; Hidayat, Erwin Yudi
Jurnal Informatika: Jurnal Pengembangan IT Vol 8, No 3 (2023)
Publisher : Politeknik Harapan Bersama

DOI: 10.30591/jpit.v8i3.5703

Abstract

Artificial intelligence technology has grown quickly in recent years, and convolutional neural network (CNN) technology has developed along with it. However, because convolutional neural networks involve many calculations and the optimization of numerous matrices, applying them requires appropriate hardware such as GPUs or other accelerators. Applying transfer learning techniques is one way around this resource barrier. MobileNetV2 is an example of a lightweight convolutional neural network architecture well suited to transfer learning. The objective of this research is to compare the performance of the SGD and Adam optimizers on the MobileNetV2 convolutional neural network architecture. Model training uses a learning rate of 0.0001, a batch size of 32, and binary cross-entropy as the loss function. Training runs for up to 100 epochs with early stopping and a patience of 10 epochs. The results show that both the Adam and SGD models are capable crowd classifiers: the Adam model reaches 96% accuracy, while the SGD model reaches 95%. However, the SGD model performs slightly better overall despite its lower accuracy, because its training curves show better stability; the SGD model's loss and accuracy graphs are more consistent and fluctuate less than the Adam model's.
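The two update rules being compared can be sketched in a few lines (an illustrative toy minimizing f(w) = w², not the paper's training code; the learning rate matches the 0.0001 reported above):

```python
import math

def sgd_step(w, grad, lr=1e-4):
    """Vanilla SGD: move against the gradient by a fixed fraction."""
    return w - lr * grad

def adam_step(w, grad, m, v, t, lr=1e-4, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient (m) and its
    square (v), bias-corrected, give a per-step adaptive effective rate."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad * grad
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (math.sqrt(v_hat) + eps), m, v

# Run both optimizers on f(w) = w^2 (gradient 2w) from the same start.
w_sgd = w_adam = 1.0
m = v = 0.0
for t in range(1, 101):
    w_sgd = sgd_step(w_sgd, 2 * w_sgd)
    w_adam, m, v = adam_step(w_adam, 2 * w_adam, m, v, t)
```

On this toy objective both weights decay toward zero; Adam's adaptive scaling makes its step size nearly constant while SGD's shrinks with the gradient, a simplified view of the stability difference the abstract describes.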