Putri Prasetia, Cintia
Unknown Affiliation

Published: 1 Document
Articles

Efficient Real and Fake Face Detection Using ResNet18
Putri Prasetia, Cintia
JOURNAL OF INFORMATICS AND TELECOMMUNICATION ENGINEERING, Vol. 9 No. 1 (2025): July 2025
Publisher : Universitas Medan Area

DOI: 10.31289/jite.v9i1.15128

Abstract

This study aims to develop a classification model for distinguishing between real and fake facial images using a lightweight Convolutional Neural Network architecture, specifically ResNet18. The research addresses the growing misuse of synthetic facial images in biometric security systems and identity verification processes. A combined dataset was used, consisting of secondary data from the 140K Real and Fake Faces dataset on Kaggle and primary images captured via a local camera. Preprocessing steps included resizing all images to 128×128 pixels, horizontal flipping, and normalization. The model was trained for five epochs using the FastAI framework with the one-cycle learning rate strategy. The experimental results show that the ResNet18 model achieved a test accuracy of 92.1%, with balanced precision, recall, and F1-score across both classes. Evaluation metrics were supported by a classification report and confusion matrix. The model contains 11.7 million parameters and completed training in approximately 9 minutes and 42 seconds, indicating its computational efficiency on a T4 GPU. While the study referenced deeper architectures such as ResNet34 and ResNet50 for context, no direct comparative experiments were conducted. Therefore, conclusions regarding relative performance are limited to the reported metrics of ResNet18 alone. The findings support the feasibility of deploying ResNet18-based models for real-time facial image classification in resource-constrained environments. Future research is encouraged to explore architecture comparisons, more advanced augmentation techniques, and evaluation using video-based inputs for improved generalization.
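The pipeline described in the abstract maps onto only a few lines of FastAI code. The following is a minimal sketch, not the authors' actual implementation: the dataset path, the folder-per-class layout, the 20% validation split, and the 'frame.jpg' filename are assumptions, while the 128×128 resize, horizontal flipping, normalization, the ResNet18 backbone, five-epoch one-cycle training, and the confusion-matrix/classification-report evaluation come from the abstract.

from fastai.vision.all import *

# Assumed layout: one subfolder per class, e.g. data/real and data/fake.
path = Path('data')

dls = ImageDataLoaders.from_folder(
    path,
    valid_pct=0.2,                  # assumed hold-out fraction; not stated in the abstract
    item_tfms=Resize(128),          # resize every image to 128x128 pixels
    batch_tfms=[Flip(p=0.5),        # random horizontal flipping
                Normalize.from_stats(*imagenet_stats)],
)

# ResNet18 backbone (~11.7 million parameters)
learn = vision_learner(dls, resnet18, metrics=accuracy)
learn.fit_one_cycle(5)              # five epochs with the one-cycle learning rate policy

# Evaluation: confusion matrix and per-class precision/recall/F1
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()
interp.print_classification_report()

# Single-image prediction, e.g. on a frame from a local camera
# ('frame.jpg' is a hypothetical filename)
pred_class, pred_idx, probs = learn.predict(PILImage.create('frame.jpg'))

Note that vision_learner replaces the network's final layer with a new classification head, so the trained model's exact parameter count will differ slightly from the 11.7 million of the plain ResNet18 backbone cited in the abstract.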