Sinaga, Novi Novanni
Unknown Affiliation

Published: 1 Document
Articles

Found 1 Document

Keamanan Pengenalan Wajah Berbasis Deep Learning: Tinjauan Sistematis Serangan Adversarial dan Strategi Pertahanan [Security of Deep Learning-Based Face Recognition: A Systematic Review of Adversarial Attacks and Defense Strategies]
Syahputra, Fahmy; Sabrina, Elsa; Sitorus, Andika; Lubis, Khodijah May Nuri; Saragi, Frans Jhonatan; Nurrahma, Suci; Sinaga, Novi Novanni
TRILOGI: Jurnal Ilmu Teknologi, Kesehatan, dan Humaniora Vol 6, No 4 (2025)
Publisher : Universitas Nurul Jadid

DOI: 10.33650/trilogi.v6i4.13424

Abstract

Deep learning–based face recognition is widely adopted due to its strong performance, yet its susceptibility to attacks, particularly adversarial attacks, poses critical risks to the security and reliability of biometric systems. This study presents a Systematic Literature Review (SLR) that synthesizes evidence on the performance, vulnerabilities, and defense strategies of deep learning–based face recognition. The review follows PRISMA guidelines, including literature retrieval from reputable scholarly sources, deduplication, title/abstract screening, and full-text eligibility assessment against predefined inclusion and exclusion criteria. Study quality is examined through critical appraisal, and findings are synthesized using thematic analysis, yielding four major themes: (1) model performance and the factors influencing accuracy, (2) attack types and their impact on recognition outcomes, (3) defense mechanisms and their effectiveness, and (4) real-world deployment constraints (e.g., illumination, pose, image quality, and identity scale). The synthesis indicates that high accuracy does not necessarily imply high robustness; several defenses (e.g., adversarial training, attack detection, and robust learning) can improve resilience but may introduce trade-offs in computational cost and/or accuracy. The review provides a comparative synthesis and a conceptual model linking accuracy, attacks, and defenses, and offers practical recommendations for model selection and the design of security evaluations. Limitations include heterogeneity in datasets and experimental protocols, inconsistent reporting metrics, and potential publication bias.
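To illustrate the attack/defense framing summarized above, the following minimal sketch shows a fast-gradient-sign-method (FGSM) perturbation and a simple adversarial-training step in PyTorch. It assumes a generic image classifier; the function names and the epsilon value are hypothetical and are not taken from the reviewed studies, which evaluate a range of attacks and defenses beyond this example.

# Illustrative sketch only: FGSM perturbation and one adversarial-training step.
# Assumes a generic PyTorch classifier `model`; names and hyperparameters are hypothetical.
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 8 / 255) -> torch.Tensor:
    """Return a copy of x perturbed in the direction that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step by epsilon along the sign of the input gradient, then clamp to valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

def adversarial_training_step(model: nn.Module, optimizer, x, y, epsilon: float = 8 / 255):
    """Train on clean and perturbed inputs; the extra forward/backward pass is the
    computational trade-off the review associates with adversarial training."""
    x_adv = fgsm_perturb(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = (nn.functional.cross_entropy(model(x), y)
            + nn.functional.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()

The sketch also makes the robustness trade-off concrete: each training step requires an additional gradient computation to craft the perturbed batch, which is one reason the review notes that adversarial training can raise computational cost and sometimes reduce clean-data accuracy.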