Gracia Rizka Pasfica
Institut Teknologi Telkom Purwokerto

Published: 2 Documents

Comparative Study of VGG16 and MobileNetV2 for Masked Face Recognition
Faisal Dharma Adhinata; Nia Annisa Ferani Tanjung; Widi Widayat; Gracia Rizka Pasfica; Fadlan Raka Satura
Jurnal Ilmiah Teknik Elektro Komputer dan Informatika Vol 7, No 2 (2021): August
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/jiteki.v7i2.20758

Abstract

Indonesia is one of the countries affected by the coronavirus pandemic, which has claimed many lives. The pandemic forces us to wear masks daily, especially at work, to break the chain of transmission. Before the pandemic, face recognition for attendance used the entire face as input, so the results were accurate. During the pandemic, however, all employees wear masks, including when recording attendance, which can reduce recognition accuracy. In this research, we use a deep learning technique to recognize masked faces. We propose using pre-trained transfer learning models to perform feature extraction and classification of masked face image data. Transfer learning was chosen because of the small amount of data available. We analyzed two transfer learning models, VGG16 and MobileNetV2, and evaluated each model using batch size and number of epochs as parameters. The best configuration for each model was a batch size of 32 and 50 epochs. The results show that the MobileNetV2 model was more accurate than VGG16, with an accuracy of 95.42%. This study provides an overview of the use of transfer learning techniques for masked face recognition.
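The transfer-learning setup the abstract describes (a pre-trained MobileNetV2 used as a frozen feature extractor with a new classification head) can be sketched in Keras as follows. This is a minimal illustration, not the authors' code: the input size, the class count `NUM_CLASSES`, and the head layout are assumptions.

```python
# Hedged sketch: transfer learning with a frozen MobileNetV2 base,
# as described in the abstract. NUM_CLASSES is hypothetical.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # hypothetical: one class per enrolled person

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,   # drop the ImageNet classifier head
    weights=None,        # in practice, weights="imagenet" for transfer learning
    pooling="avg",       # global average pooling -> one feature vector per image
)
base.trainable = False   # freeze the base: use it only for feature extraction

model = models.Sequential([
    base,
    layers.Dense(NUM_CLASSES, activation="softmax"),  # new classification head
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Training would use the best-reported settings from the paper:
# model.fit(train_images, train_labels, batch_size=32, epochs=50)
```

The same skeleton applies to the VGG16 comparison by swapping in `tf.keras.applications.VGG16` as the base.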
Implementasi Convolutional Neural Network Untuk Deteksi Emosi Melalui Wajah (Implementation of a Convolutional Neural Network for Facial Emotion Detection)
Rizki Rafiif Amaanullah; Gracia Rizka Pasfica; Satria Adi Nugraha; Mohammad Rifqi Zein; Faisal Dharma Adhinata
JTIM : Jurnal Teknologi Informasi dan Multimedia Vol 3 No 4 (2022): February
Publisher : Puslitbang Sekawan Institute Nusa Tenggara

DOI: 10.35746/jtim.v3i4.189

Abstract

The human emotional condition is reflected in speech, gestures, and especially facial expressions. A common problem is that humans tend to be subjective when assessing other people's emotions. Humans can easily guess someone's emotions from the expressions shown, and computers can do the same if given an algorithm that mimics human thinking, i.e., artificial intelligence. This research addresses human-computer interaction in analyzing human expressions and tests whether a CNN (Convolutional Neural Network) can be used to detect human emotions. The material needed for this facial recognition research is a dataset of images of various human expressions. The collected images are divided into two parts, training data and test data, each containing seven subfolders, one per emotion. Each category initially contains around 35 thousand images, which are later trimmed to a few thousand to balance the dataset. According to their class, the expressions are classified into seven emotions: angry, happy, fearful, disgusted, surprised, neutral, and sad. After 40 epochs of training, the model achieved 81.92% accuracy on the training data and 81.69% on the test data.
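The class-balancing step the abstract mentions (trimming unevenly sized emotion folders so every class has the same number of images before the train/test split) can be sketched as below. This is an illustration only; the folder names follow the seven emotions listed in the abstract, while the per-class image counts are invented for the demo.

```python
# Hedged sketch of balancing a facial-expression dataset by trimming
# every class to the size of the smallest class. File paths are simulated.
import random

EMOTIONS = ["angry", "happy", "fear", "disgust", "surprise", "neutral", "sad"]

def balance_classes(files_by_emotion):
    """Randomly subsample each class down to the smallest class size."""
    smallest = min(len(files) for files in files_by_emotion.values())
    rng = random.Random(0)  # fixed seed so the subsample is reproducible
    return {emotion: rng.sample(files, smallest)
            for emotion, files in files_by_emotion.items()}

# Simulated folder listing with uneven (hypothetical) class sizes.
counts = [5200, 8900, 5100, 3600, 4000, 6100, 6000]
dataset = {emotion: [f"{emotion}/{i}.png" for i in range(n)]
           for emotion, n in zip(EMOTIONS, counts)}

balanced = balance_classes(dataset)
# Every class now has exactly min(counts) = 3600 images.
```

In practice the balanced file lists would then be split into the training and test folders described in the abstract before feeding a CNN.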