
Found 2 Documents

A Secured, Multilevel Face Recognition based on Head Pose Estimation, MTCNN and FaceNet
Dang, Thai-Viet; Tran, Hoai-Linh
Journal of Robotics and Control (JRC) Vol 4, No 4 (2023)
Publisher : Universitas Muhammadiyah Yogyakarta

DOI: 10.18196/jrc.v4i4.18780

Abstract

Artificial Intelligence (AI) and the Internet of Things (IoT) have always attracted considerable attention from scholars and researchers because of their broad applicability, which makes them typical technologies of the Fourth Industrial Revolution. The hallmark of AI is its self-learning ability, which enables computers to predict and analyze complex data such as biometric data (fingerprints, irises, and faces), voice, and text. Among these applications, face recognition is under intense research due to the demand for user identification. This paper proposes a new, secured, two-step solution for an identification system that uses the MTCNN and FaceNet networks enhanced with head pose estimation of the users. The model's accuracy ranges from 92% to 95%, making it competitive with recent research and demonstrating the system's usability.
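The two-step verification idea in the abstract can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the MTCNN detector and FaceNet embedder are not reproduced, and `verify`, its similarity threshold, and the pose limits are all illustrative assumptions.

```python
import math

# Hypothetical sketch of a two-step check: a head-pose gate followed by
# an embedding comparison. In the paper, the embedding would come from
# FaceNet applied to an MTCNN-detected face crop; here it is just a vector.

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(embedding, enrolled, yaw_deg, pitch_deg,
           sim_threshold=0.7, pose_limit_deg=30.0):
    """Step 1: reject frames where the head is turned too far for a
    reliable embedding. Step 2: match against the enrolled embedding."""
    if abs(yaw_deg) > pose_limit_deg or abs(pitch_deg) > pose_limit_deg:
        return False
    return cosine_similarity(embedding, enrolled) >= sim_threshold
```

For example, a frontal face whose embedding matches the enrolled one passes, while the same embedding with a 45° yaw is rejected by the pose gate before any matching happens. The thresholds here are placeholders; in practice they would be tuned on validation data.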
An Ultra Fast Semantic Segmentation Model for AMR’s Path Planning
Tran, Hoai-Linh; Dang, Thai-Viet
Journal of Robotics and Control (JRC) Vol 4, No 3 (2023)
Publisher : Universitas Muhammadiyah Yogyakarta

DOI: 10.18196/jrc.v4i3.18758

Abstract

Computer vision plays a significant role in mobile robot navigation because of the abundance of information that can be extracted from digital images. On the basis of the captured images, mobile robots determine their location and proceed to the desired destination. Obstacle avoidance still requires a complex sensor system with high computational efficiency due to the complexity of the environment. This research provides a real-time solution to the problem of extracting corridor scenes from a single image, using an ultra-fast semantic segmentation model to reduce the number of training parameters and the computational cost. The model achieves a mean Intersection over Union (mIoU) of 89% and an accuracy of 95%. To demonstrate the viability of the proposed method, the simulation results are compared with those of contemporary techniques. Finally, the authors employ the segmented image to construct the frontal view of the mobile robot in order to determine the free areas available for mobile robot path planning tasks.
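The last step of the abstract, reading free space for path planning off a segmented frontal view, can be sketched roughly like this. The segmentation network itself is not reproduced; the label grid, the `free_columns` helper, and its parameters are illustrative assumptions, not the paper's method.

```python
# Hypothetical free-space extraction from a segmented corridor image.
# We assume the segmentation model has produced a 2D label grid where
# 1 marks navigable floor and 0 marks obstacles or walls.

def free_columns(seg, min_clear_rows=2):
    """Return the column indices whose bottom rows (closest to the robot
    in a frontal view) are all floor pixels, i.e. candidate headings."""
    rows = len(seg)
    cols = len(seg[0])
    clear = []
    for c in range(cols):
        # Check the min_clear_rows rows nearest the robot in this column.
        if all(seg[r][c] == 1 for r in range(rows - min_clear_rows, rows)):
            clear.append(c)
    return clear

# Toy 3x4 label grid: the third column is blocked near the robot.
seg = [[0, 0, 0, 0],
       [1, 1, 0, 1],
       [1, 1, 0, 1]]
print(free_columns(seg, min_clear_rows=2))  # → [0, 1, 3]
```

A real pipeline would run this kind of scan on the model's per-pixel class map and feed the clear regions into the path planner; the grid-of-integers representation here is only a stand-in for that output.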