The FaceNet model is a deep learning-based face recognition method that transforms facial images into feature vectors (embeddings) representing the unique identity of each individual. In previous studies, the model has often been combined with classifiers such as Support Vector Machine (SVM) or K-Nearest Neighbor (K-NN). Although accurate, these approaches demand high computation and a complex inference process, making them less suitable for applications that require fast response and efficiency, such as real-time attendance systems. This research proposes an alternative approach that uses cosine similarity to compare face vectors directly. Cosine similarity measures the similarity of two vectors based on the angle between them, with values closer to 1 indicating a closer match. The system was developed by combining the FaceNet model with cosine similarity, without any additional classifier. Test results showed that faces registered in the system produced cosine similarity values between 0.83 and 0.96, with an average of 0.90, while unregistered faces scored between 0.42 and 0.67, with an average of 0.53. With the threshold set at 0.7, the system distinguished registered from unregistered faces with 100% accuracy across 30 respondents. This approach significantly reduces the computational burden, enables implementation on devices with limited specifications, and provides a practical and accurate solution for face recognition-based digital attendance systems.
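The matching step described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the function names, the toy 128-dimensional random vectors standing in for FaceNet embeddings, and the gallery structure are all assumptions; only the cosine-similarity formula and the 0.7 threshold come from the text.

```python
import numpy as np

THRESHOLD = 0.7  # decision threshold reported in the study


def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def identify(probe, gallery):
    """Return the gallery identity whose stored embedding best matches
    `probe`, or None if the best similarity falls below THRESHOLD.
    `gallery` maps name -> embedding vector (hypothetical structure)."""
    best_name, best_sim = None, -1.0
    for name, emb in gallery.items():
        sim = cosine_similarity(probe, emb)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim >= THRESHOLD else None


# Toy demonstration: random unit vectors stand in for real FaceNet embeddings.
rng = np.random.default_rng(0)
alice = rng.normal(size=128)
alice /= np.linalg.norm(alice)
gallery = {"alice": alice}

probe_same = alice + rng.normal(scale=0.05, size=128)  # noisy re-capture
probe_other = rng.normal(size=128)  # unregistered face
```

Because no classifier is trained, enrolling a new person is just adding one embedding to the gallery, which is what keeps the inference cost low on modest hardware.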