The development of Artificial Intelligence (AI) technologies, particularly deep learning, has led to the emergence of innovative applications such as deepfake technology, which enables the realistic manipulation of digital images and videos. While this technology offers positive applications in fields such as entertainment and education, it also poses significant risks of misuse, particularly in the dissemination of false information and violations of privacy. Deepfake detection has therefore become crucial for preserving the authenticity of digital content. This study analyzes the effectiveness of transfer learning for detecting deepfake images using the VGG16, VGG19, and ResNet50 architectures. The research employs a dataset of deepfake and real images sourced from Kaggle, comprising 10,826 facial images at a resolution of 256 × 256 pixels, evenly balanced between authentic and manipulated content. The data were split in an 80:20 ratio for training and testing, and each model was trained with an identical parameter configuration. Model performance was evaluated using confusion-matrix metrics: accuracy, precision, recall, and F1-score. The results indicate that VGG16 achieved the best performance, with an accuracy of 76%, followed by VGG19 at 72% and ResNet50 at 58%. VGG16 also outperformed the other models in precision, recall, and F1-score, demonstrating more effective identification of visual manipulation patterns. In contrast, ResNet50 exhibited the lowest performance, which may be attributed to a mismatch between its architectural complexity and the characteristics of the dataset. It can be concluded that a transfer learning approach using VGG16 is the most effective at detecting deepfake images on this dataset. The study also highlights the importance of selecting an architecture suited to the data and fine-tuning models to its characteristics.
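The evaluation metrics named above can be computed directly from a binary confusion matrix. The sketch below shows one way to do this for a deepfake-vs-real classifier; the counts used in the example are illustrative placeholders, not the study's actual results.

```python
def confusion_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Compute accuracy, precision, recall, and F1 from binary confusion-matrix
    counts, treating 'deepfake' as the positive class."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total if total else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Hypothetical counts for a balanced test split (not the paper's numbers):
metrics = confusion_metrics(tp=820, fp=230, fn=260, tn=855)
print({k: round(v, 3) for k, v in metrics.items()})
```

In practice, a library routine such as scikit-learn's `classification_report` produces the same figures; the explicit formulas are shown here to make the relationship between the four reported metrics clear.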