Presentations are a widely used method of delivering information. However, presenters still typically control them with a separate device such as a keyboard, mouse, or pointer. A keyboard or mouse is less effective because the presenter must sit in front of the screen, which interrupts the presenter's focus and looks less natural than a pointer. Human hand gestures offer an alternative way to control a presentation. This study aims to build a hand gesture detection and recognition system for controlling presentation applications. The system produces output in the form of simulated keyboard and mouse presses. To recognize hand gestures, the system uses a deep learning method, You Only Look Once version 3 (YOLOv3), with an NVIDIA Jetson Nano as the test device. The system is designed to detect hand gestures at distances of 1 to 2.5 meters. The training data consist of 7,080 images across 8 classes of hand gestures, captured at predetermined distances. In distance-based testing, the system achieved an average accuracy of 91.18% and an average computation time of 0.5988 seconds.
Copyright © 2021