Indonesian Journal of Electrical Engineering and Computer Science
Vol 34, No 3: June 2024

TQU-HG dataset and comparative study for hand gesture recognition of RGB-based images using deep learning

Van-Dinh Do (Sao Do University)
Van-Hung Le (Tan Trao University)
Huu-Son Do (Tan Trao University)
Van-Nam Phan (Tan Trao University)
Trung-Hieu Te (Tan Trao University)



Article Info

Publish Date
01 Jun 2024

Abstract

Hand gesture recognition has important applications in human-computer interaction (HCI), human-robot interaction (HRI), and support for the deaf and mute. Building a hand gesture recognition model with high accuracy using deep learning (DL) requires training on large amounts of data collected under many different conditions and contexts. In this paper, we publish TQU-HG, a large dataset of RGB images with low resolution (640×480 pixels), low-light conditions, and a fast capture speed (16 fps). The TQU-HG dataset includes 60,000 images collected from 20 people (10 male, 10 female) performing 15 gestures with both the left and right hands. We present a comparative study with two branches: i) based on MediaPipe TML and ii) based on convolutional neural networks (CNNs) (you only look once (YOLO): YOLOv5, YOLOv6, YOLOv7, YOLOv8, and YOLO-NAS; single shot multiBox detector (SSD) with VGG16; residual network (ResNet): ResNet18, ResNet50, and ResNet152; ResNext50; MobileNet V3 small; and MobileNet V3 large). The architecture and operation of the CNN models are also introduced in detail. In particular, we fine-tune the models and evaluate them on the TQU-HG and HaGRID datasets. The quantitative results of training and testing are presented (the F1-scores of YOLOv8, YOLO-NAS, MobileNet V3 small, and ResNet50 are 98.99%, 98.98%, 99.27%, and 99.36%, respectively, on the TQU-HG dataset, and 99.21%, 99.37%, 99.36%, 86.4%, and 98.3%, respectively, on the HaGRID dataset). The computation time of YOLOv8 is 6.19 fps on a CPU and 18.28 fps on a GPU.
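For the first branch, the sketch below shows how hand landmarks can be extracted with the MediaPipe Hands solution; it is a minimal illustration only, where the image file name and confidence threshold are hypothetical, and the gesture classifier that the MediaPipe TML pipeline trains on top of the landmarks is not shown.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

# static_image_mode=True treats each frame independently, matching
# per-image recognition; max_num_hands=2 covers left and right hands.
with mp_hands.Hands(static_image_mode=True, max_num_hands=2,
                    min_detection_confidence=0.5) as hands:
    image = cv2.imread("gesture.jpg")  # hypothetical 640x480 RGB frame
    results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        for hand in results.multi_hand_landmarks:
            # 21 normalized (x, y, z) landmarks per detected hand; these
            # can be fed to a lightweight gesture classifier.
            print([(lm.x, lm.y, lm.z) for lm in hand.landmark])
```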
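For the CNN branch, a minimal sketch of fine-tuning and evaluating YOLOv8 with the ultralytics Python API is given below; the data config name tqu_hg.yaml, the model variant, and the training hyperparameters are assumptions for illustration, not the authors' settings.

```python
from ultralytics import YOLO  # pip install ultralytics

# Start from a pretrained YOLOv8 checkpoint (the nano variant here is an
# arbitrary choice; the abstract does not state which variant was used).
model = YOLO("yolov8n.pt")

# Fine-tune on a hand-gesture dataset described by a YOLO-format data file.
# "tqu_hg.yaml" is a hypothetical config pointing at train/val image
# folders and listing the 15 gesture class names.
model.train(data="tqu_hg.yaml", epochs=100, imgsz=640)

# Validate: the returned metrics expose precision, recall, and mAP, from
# which F1-scores such as those reported above can be derived.
metrics = model.val()
print(metrics.box.map50)
```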

Copyright © 2024