Real-time face recognition systems must balance high security demands against computational efficiency, particularly when deployed in unconstrained open-set environments. This study presents a comprehensive benchmark of four distinct deep learning backbones (ResNet100, GhostFaceNet, LAFS, and TransFace), each trained with the Adaptive Margin Loss (AdaFace) to handle image quality variations. The primary objective is to identify the optimal architecture for secure attendance systems operating on standard hardware with limited training data. The evaluation protocol employs a rigorous real-world open-set test that quantifies performance using the False Acceptance Rate (FAR) and False Rejection Rate (FRR). The experimental results show that ResNet100 establishes the highest security standard, achieving a 0.00% FAR at strict thresholds, while GhostFaceNet emerges as the most balanced solution for resource-constrained deployments, delivering competitive accuracy above 93% at significantly lower computational complexity. Conversely, the Vision Transformer backbone (TransFace) fails to generalize in this low-data regime, producing unacceptable false acceptance rates. Based on these findings, GhostFaceNet is recommended for efficient edge-based implementations, while ResNet100 remains the superior choice for mission-critical security applications.
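To make the FAR/FRR evaluation concrete, the following is a minimal sketch of how the two rates can be computed from verification scores at a fixed decision threshold. It assumes cosine-similarity scores between face embeddings and uses synthetic score distributions and hypothetical threshold values purely for illustration; it is not the paper's actual evaluation protocol or data.

```python
import numpy as np

def far_frr(genuine_scores, impostor_scores, threshold):
    """Compute FAR and FRR at a given similarity threshold.

    FAR: fraction of impostor (open-set / unenrolled) comparisons accepted.
    FRR: fraction of genuine (enrolled-identity) comparisons rejected.
    """
    genuine_scores = np.asarray(genuine_scores)
    impostor_scores = np.asarray(impostor_scores)
    far = np.mean(impostor_scores >= threshold)  # impostors wrongly accepted
    frr = np.mean(genuine_scores < threshold)    # genuine users wrongly rejected
    return far, frr

# Hypothetical cosine-similarity distributions (illustrative only)
rng = np.random.default_rng(0)
genuine = rng.normal(0.65, 0.10, 5_000)     # same-identity pairs
impostor = rng.normal(0.20, 0.10, 20_000)   # different-identity / unknown pairs

for t in (0.40, 0.50, 0.60):
    far, frr = far_frr(genuine, impostor, t)
    print(f"threshold={t:.2f}  FAR={far:.4%}  FRR={frr:.4%}")
```

Sweeping the threshold in this way exposes the security/usability trade-off the abstract refers to: stricter thresholds drive FAR toward zero at the cost of a higher FRR.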