Zakaria, Zaki
Unknown Affiliation

Published: 1 document
Scalability and Efficiency: A Comparative Study of Face Recognition Technologies Zakaria, Zaki; Misinem, Misinem; Sopiah, Nyimas; Efrizoni, Lusiana
International Journal of Advances in Artificial Intelligence and Machine Learning, Vol. 1 No. 1 (2024)
Publisher : CV Media Inti Teknologi

DOI: 10.58723/ijaaiml.v1i1.296

Abstract

This article addresses the challenge of selecting the most effective machine learning algorithm for face recognition tasks, a common problem in academic research and practical applications. To tackle this issue, we conducted a comparative analysis of five widely used algorithms: Linear Discriminant Analysis (LDA), Logistic Regression, Naive Bayes, K-Nearest Neighbors (KNN), and Support Vector Machine (SVM). The study involved implementing each algorithm on a standardized dataset, followed by a rigorous evaluation of their performance based on accuracy metrics. The results revealed that LDA, Logistic Regression, and SVM significantly outperformed the other models, each achieving an accuracy of 97%. This high accuracy indicates that these algorithms are well suited to datasets with linearly separable classes. Naive Bayes also showed strong performance with 90% accuracy, proving effective under the feature independence assumption. KNN, however, lagged behind with an accuracy of 70%, highlighting its sensitivity to data scale and local structure, which limits its applicability to larger datasets or real-time scenarios. The findings suggest that while LDA, Logistic Regression, and SVM are optimal for datasets with clear class distinctions, the choice of algorithm should still be guided by the specific data characteristics and computational constraints. This study underscores the necessity of carefully weighing each algorithm's strengths and limitations so that the selected model aligns with the unique demands of the application. Future work could further explore ensemble methods and advanced parameter tuning to enhance the performance and robustness of these models.
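The comparison described in the abstract can be sketched with scikit-learn. This is an illustrative reconstruction, not the authors' code: it trains the same five classifiers and reports test-set accuracy, but substitutes a synthetic dataset (via `make_classification`) for the paper's face dataset so the sketch stays self-contained; the dataset shape, split ratio, and hyperparameters are assumptions.

```python
# Hedged sketch of the five-way comparison in the abstract.
# A synthetic dataset stands in for the paper's face data.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for face features: 5 identities, 64-dim vectors.
X, y = make_classification(n_samples=400, n_features=64, n_informative=32,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Standardize features; the abstract notes KNN's sensitivity to data
# scale, and SVM also benefits from scaled inputs.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Naive Bayes": GaussianNB(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="linear"),
}

# Fit each model and record its test accuracy.
results = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    results[name] = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: {results[name]:.2f}")
```

On data of this kind the linear models (LDA, Logistic Regression, linear SVM) tend to cluster near the top, echoing the paper's finding that linearly separable classes favor these methods; the exact accuracy figures will differ from the 97%/90%/70% reported on the real face dataset.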