In presentation activities, the use of physical devices such as a mouse or remote often limits the presenter's mobility and reduces the effectiveness of interaction with the audience. This study aims to implement a hand gesture recognition system as an alternative solution for controlling presentation slides in real time without additional devices. The system was developed using the MediaPipe framework for hand landmark detection, OpenCV for video image processing, and a Long Short-Term Memory (LSTM) model for sequential gesture classification. Three main gestures were defined as commands, namely "Next," "Previous," and "Idle," with input taken from a live video stream at a distance of 1–3 meters. The development process included manual labeling of gesture data from multiple users, training the LSTM model on sequential data, and testing the system in real time integrated with Microsoft PowerPoint. Experimental results indicate that the system recognized hand gestures with high accuracy across most scenarios, with optimal performance observed at distances of 1–2 meters. However, accuracy decreased under low-light conditions or when gestures were performed too quickly. These findings demonstrate that the combination of MediaPipe, OpenCV, and LSTM is effective for building a gesture-based presentation control system. Beyond enhancing flexibility and interactivity in presentations, this research also contributes to the development of more natural and practical human–computer interaction systems, while offering opportunities for broader applications in other domains.
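The pipeline described above (per-frame MediaPipe hand landmarks buffered into sequences, classified by an LSTM, and mapped to slide commands) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the window length, command names, and the `classify` callable (standing in for the trained LSTM's prediction) are all assumptions, and the 63-value frame shape assumes MediaPipe Hands' 21 landmarks × (x, y, z).

```python
from collections import deque

# Label set from the study: "Next", "Previous", "Idle"
LABELS = ["Next", "Previous", "Idle"]
SEQ_LEN = 30  # assumed sliding-window length in frames; not specified in the abstract


class GestureController:
    """Buffers per-frame hand landmarks and maps classified gestures
    to slide commands.

    `classify` is any callable that takes a full window of frames and
    returns one of LABELS -- in the real system this would wrap the
    trained LSTM's argmax prediction; here it is injected so the logic
    can be tested without a camera or model.
    """

    def __init__(self, classify, seq_len=SEQ_LEN):
        self.classify = classify
        self.window = deque(maxlen=seq_len)

    def on_frame(self, landmarks):
        """landmarks: flat list of 63 floats (21 MediaPipe hand
        landmarks x 3 coordinates) for one video frame.
        Returns a slide command string, or None for "Idle" or
        while the buffer is still filling."""
        self.window.append(landmarks)
        if len(self.window) < self.window.maxlen:
            return None  # not enough frames for a sequence yet
        label = self.classify(list(self.window))
        if label == "Next":
            return "advance_slide"
        if label == "Previous":
            return "previous_slide"
        return None  # "Idle" -> no action
```

In a live setup, `on_frame` would be called once per captured frame, and the returned command forwarded to PowerPoint (e.g. as a simulated arrow-key press); decoupling classification from command dispatch keeps the gesture logic testable in isolation.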
Copyright © 2025