Articles

Found 2 Documents
Journal : Journal of Robotics and Control (JRC)

Towards Controlling Mobile Robot Using Upper Human Body Gesture Based on Convolutional Neural Network
Fuad, Muhammad; Umam, Faikul; Wahyuni, Sri; Fahriani, Nuniek; Nurwahyudi, Ilham; Darwaman, Mochammad Ilham; Maulana, Fahmi
Journal of Robotics and Control (JRC) Vol 4, No 6 (2023)
Publisher : Universitas Muhammadiyah Yogyakarta

DOI: 10.18196/jrc.v4i6.20399

Abstract

Human-Robot Interaction (HRI) faces challenges in the investigation of nonverbal and natural interaction. This study contributes a gesture recognition system capable of recognizing the entire human upper body for HRI, which has not been done in previous research. Preprocessing is applied to improve image quality, reduce noise, and highlight the important features of each image, and consists of color segmentation, thresholding, and resizing. Hue, saturation, value (HSV) color segmentation is performed using a blue backdrop and additional lighting to deal with illumination issues. Thresholding then produces a black-and-white image that separates foreground from background. Resizing adjusts each image to the size expected by the model. The preprocessed images are used as input for gesture recognition based on a Convolutional Neural Network (CNN). This study recorded five gestures from five research subjects of different gender and body posture, for a total of 450 images, divided into 380 training and 70 testing images. Experiments performed in an indoor environment showed that the CNN achieved 92% accuracy in gesture recognition. This is a lower accuracy than the AlexNet model, but with a faster training computation time of 9 seconds. The result was obtained by testing the system over various distances; the optimal camera distance for a user to interact with the mobile robot using gestures was 2.5 m. In future research, the proposed method will be improved and implemented for mobile robot motion control.
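The sketch below illustrates the preprocessing pipeline and classifier described in this abstract (HSV segmentation against the blue backdrop, thresholding, resizing, and a CNN over five gesture classes), in Python with OpenCV and Keras. The HSV range for the blue backdrop, the 64x64 input size, and the network layout are illustrative assumptions; the abstract does not specify them, so this is not the authors' exact model.

```python
# Minimal sketch of the described pipeline: HSV color segmentation against a
# blue backdrop, thresholding to a black-and-white image, resizing to the
# model's input size, and a small CNN for five gesture classes.
# HSV bounds, input size, and CNN layout are assumptions for illustration.
import cv2
import numpy as np
from tensorflow.keras import layers, models

TARGET_SIZE = (64, 64)  # assumed CNN input size (not given in the abstract)

def preprocess(frame_bgr: np.ndarray) -> np.ndarray:
    """Segment the subject from the blue backdrop and return a binary image."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Assumed HSV range for the blue backdrop under the additional lighting.
    backdrop = cv2.inRange(hsv, np.array([100, 80, 80]), np.array([130, 255, 255]))
    # Everything that is not backdrop is foreground; threshold to black and white.
    _, binary = cv2.threshold(cv2.bitwise_not(backdrop), 127, 255, cv2.THRESH_BINARY)
    # Resize to the size expected by the model and scale to [0, 1].
    resized = cv2.resize(binary, TARGET_SIZE, interpolation=cv2.INTER_AREA)
    return resized.astype(np.float32)[..., np.newaxis] / 255.0

def build_gesture_cnn(num_classes: int = 5) -> models.Model:
    """Small CNN classifier for the five upper-body gestures (layout assumed)."""
    model = models.Sequential([
        layers.Input(shape=(*TARGET_SIZE, 1)),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```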

Obstacle Avoidance Based on Stereo Vision Navigation System for Omni-directional Robot
Umam, Faikul; Fuad, Muhammad; Suwarno, Iswanto; Ma'arif, Alfian; Caesarendra, Wahyu
Journal of Robotics and Control (JRC) Vol 4, No 2 (2023)
Publisher : Universitas Muhammadiyah Yogyakarta

DOI: 10.18196/jrc.v4i2.17977

Abstract

This paper addresses the problem of obstacle avoidance in mobile robot navigation systems. The navigation system is critical because the robot must be controllable from its initial position to its destination without collision: it must be able to avoid obstacles and still arrive at its destination. Several previous studies have focused on predetermined stationary obstacles, which makes their results difficult to apply in real environments, where obstacles can be stationary or moving due to changes in the surrounding environment. The objective of this study is to address the robot's navigation behaviors for avoiding obstacles. To deal with this complexity, a control system is designed using a neuro-fuzzy approach so that the robot can avoid obstacles while moving toward its destination. This paper uses ANFIS for obstacle avoidance control with offline learning: the input and output data are mapped in an initial step, and the data are then trained until the error is very small. To make the robot's movement more flexible and smoother when avoiding obstacles, and to identify objects in real time, a three-wheeled omnidirectional robot equipped with a stereo vision sensor is used. The contribution is to advance the state of the art in obstacle avoidance for robot navigation systems by exploiting ANFIS with target and obstacle detection based on stereo vision sensors. The proposed control method was tested in 15 experiments with different obstacle positions, chosen to test the ability to avoid moving obstacles approaching from the front, the right, or the left of the robot. The robot moved to the left or right of the obstacles depending on the given Vy speed. Across these tests, the robot managed to avoid the obstacle when the obstacle distance ranged from 150 to 173 cm, with an average Vy speed of 274 mm/s. While avoiding obstacles, the robot continues to calculate its heading toward the target until the target angle is 0.
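The sketch below illustrates the control interface described in this abstract: obstacle distance and bearing from the stereo vision sensor are mapped to a lateral velocity Vy for the omnidirectional robot, while the robot keeps rotating toward the target until the target angle is 0. A simple hand-written rule stands in for the trained ANFIS model, and the gains, forward speeds, and the avoidance-side logic are illustrative assumptions; only the 173 cm avoidance range and the 274 mm/s average Vy come from the abstract.

```python
# Minimal sketch of the avoidance behavior: stereo-vision measurements in,
# omnidirectional velocity command (vx, vy, omega) out. A hand-written rule
# replaces the trained ANFIS; thresholds and gains are assumptions.
from dataclasses import dataclass

@dataclass
class Command:
    vx: float     # forward speed toward the target (mm/s)
    vy: float     # lateral avoidance speed (mm/s); sign selects the side
    omega: float  # rotation toward the target (rad/s)

AVOID_RANGE_CM = 173.0  # avoidance distance reported in the abstract
VY_NOMINAL = 274.0      # average Vy reported in the abstract (mm/s)

def avoidance_command(obstacle_dist_cm: float,
                      obstacle_bearing_rad: float,
                      target_angle_rad: float) -> Command:
    """Map stereo-vision measurements to an omnidirectional velocity command."""
    # Keep rotating until the target angle is 0 (assumed proportional gain).
    omega = -1.5 * target_angle_rad
    if obstacle_dist_cm > AVOID_RANGE_CM:
        # No obstacle in range: drive straight toward the target.
        return Command(vx=300.0, vy=0.0, omega=omega)
    # Obstacle in range: sidestep away from the side the obstacle is on.
    side = -1.0 if obstacle_bearing_rad >= 0.0 else 1.0
    return Command(vx=150.0, vy=side * VY_NOMINAL, omega=omega)
```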