Face gender recognition plays a critical role in applications such as security systems, personalized services, and human-computer interaction. Although VGG-16 is commonly used in this domain, it struggles to retain important spatial information under varying lighting conditions, facial expressions, and viewing angles. This study enhances the VGG-16 model by integrating the Convolutional Block Attention Module (CBAM), which combines channel and spatial attention mechanisms. Several training scenarios were explored, including applying CBAM to all convolutional blocks and fine-tuning blocks 2 to 5. Experiments conducted on the Labeled Faces in the Wild (LFW) Gender dataset showed a notable improvement in performance. The best configuration achieved an accuracy of 91.78%, outperforming the baseline VGG-16 configurations (82.13%–88.72%). Other evaluation metrics such as precision, recall, and F1-score also improved, confirming the effectiveness of attention mechanisms in enhancing feature extraction and classification accuracy in face gender recognition tasks.
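The CBAM design referenced above applies channel attention followed by spatial attention to a convolutional feature map. A minimal NumPy sketch of that forward pass is shown below; the bottleneck weights `w1`/`w2`, the reduction ratio, and the spatial gating (simplified here to a 1×1 convolution over the 2-channel pooled map, rather than the 7×7 convolution used in the original CBAM) are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    # x: (C, H, W). A shared two-layer MLP (w1, w2) scores both the
    # average-pooled and max-pooled channel descriptors.
    avg = x.mean(axis=(1, 2))                        # (C,)
    mx = x.max(axis=(1, 2))                          # (C,)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)     # ReLU bottleneck
    scale = sigmoid(mlp(avg) + mlp(mx))              # per-channel gate in (0, 1)
    return x * scale[:, None, None]

def spatial_attention(x, kernel):
    # x: (C, H, W). Channel-wise average and max maps are stacked and
    # gated; `kernel` (shape (2,)) acts as a simplified 1x1 convolution.
    avg = x.mean(axis=0)                             # (H, W)
    mx = x.max(axis=0)                               # (H, W)
    stacked = np.stack([avg, mx])                    # (2, H, W)
    att = sigmoid(np.tensordot(kernel, stacked, axes=([0], [0])))  # (H, W)
    return x * att[None, :, :]

def cbam(x, w1, w2, kernel):
    # CBAM applies channel attention first, then spatial attention.
    return spatial_attention(channel_attention(x, w1, w2), kernel)
```

Because both gates are sigmoid outputs in (0, 1), the module rescales features without changing the feature map's shape, which is what lets it be inserted after any VGG-16 convolutional block.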
Copyright © 2026