Articles

Found 2 Documents

Robot indoor navigation: comparative analysis of LiDAR 2D and visual SLAM
Messbah, Hind; Emharraf, Mohamed; Saber, Mohammed
IAES International Journal of Robotics and Automation (IJRA) Vol 13, No 1: March 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijra.v13i1.pp41-49

Abstract

Robot indoor navigation has become a significant area of research and development for applications such as autonomous robots, smart homes, and industrial automation. This article presents an in-depth comparative analysis of LiDAR 2D and visual sensor-based simultaneous localization and mapping (SLAM) approaches for robot indoor navigation. The increasing demand for autonomous robots in indoor environments has led to the development of various SLAM techniques for mapping and localization. LiDAR 2D and visual sensor-based SLAM methods are widely used due to their low cost and ease of implementation. The article provides an overview of LiDAR 2D and visual sensor-based SLAM techniques, including their working principles, advantages, and limitations. A comprehensive comparative analysis is conducted, assessing their capabilities in terms of robustness, accuracy, and computational requirements. The article also discusses the impact of environmental factors, such as lighting conditions and obstacles, on the performance of both approaches. The findings of the analysis highlight the strengths and weaknesses of each approach, providing valuable insights for researchers and practitioners when selecting the appropriate SLAM method for robot indoor navigation under specific requirements and constraints.
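
To make the comparison concrete, the mapping side of a LiDAR 2D SLAM pipeline can be illustrated with a minimal log-odds occupancy-grid update in Python. The grid size, resolution, sensor-model constants, and the simulated circular scan below are illustrative assumptions added for this listing, not values or code from the article.

import numpy as np

def scan_to_points(ranges, angles, pose):
    """Convert 2D LiDAR range/bearing readings to world-frame (x, y) endpoints."""
    x, y, theta = pose
    xs = x + ranges * np.cos(angles + theta)
    ys = y + ranges * np.sin(angles + theta)
    return np.stack([xs, ys], axis=1)

def update_occupancy_grid(grid_logodds, points, pose, resolution=0.05,
                          origin=(0.0, 0.0), l_occ=0.85, l_free=-0.4):
    """Log-odds update: cells at beam endpoints gain 'occupied' evidence,
    cells along each beam (approximated here by sampling) gain 'free' evidence."""
    ox, oy = origin
    for px, py in points:
        # Endpoint cell -> occupied evidence.
        ci = int((px - ox) / resolution)
        cj = int((py - oy) / resolution)
        if 0 <= ci < grid_logodds.shape[0] and 0 <= cj < grid_logodds.shape[1]:
            grid_logodds[ci, cj] += l_occ
        # Sample cells between the sensor and the endpoint -> free evidence.
        n = int(np.hypot(px - pose[0], py - pose[1]) / resolution)
        for t in np.linspace(0.0, 1.0, max(n, 2))[:-1]:
            fi = int((pose[0] + t * (px - pose[0]) - ox) / resolution)
            fj = int((pose[1] + t * (py - pose[1]) - oy) / resolution)
            if 0 <= fi < grid_logodds.shape[0] and 0 <= fj < grid_logodds.shape[1]:
                grid_logodds[fi, fj] += l_free
    return grid_logodds

# Example: one simulated scan from a robot at the map center (all values assumed).
grid = np.zeros((200, 200))                       # 10 m x 10 m map at 5 cm resolution
angles = np.linspace(-np.pi, np.pi, 360, endpoint=False)
ranges = np.full_like(angles, 3.0)                # fake scan: walls 3 m away all around
pose = (5.0, 5.0, 0.0)                            # (x, y, heading) in metres / radians
grid = update_occupancy_grid(grid, scan_to_points(ranges, angles, pose), pose)
prob = 1.0 / (1.0 + np.exp(-grid))                # log-odds back to occupancy probability

A visual SLAM front end would instead derive landmarks from tracked image features rather than range-bearing returns, which is where the sensitivity to lighting conditions discussed in the abstract enters.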
Object detection for indoor mobile robot: deep learning approaches review
Messbah, Hind; Emharraf, Mohamed; Saber, Mohamed
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 14, No 5: October 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v14.i5.pp3520-3527

Abstract

Efficient object detection is crucial for enabling autonomous indoor robot navigation. This paper reviews current methodologies and challenges in the field, with a focus on deep learning-based techniques. Methods like you only look once (YOLO), region-based convolutional neural networks (R-CNN), and Faster R-CNN are explored for their suitability in real-time detection in dynamic indoor environments. Deep learning models are emphasized for their ability to improve detection accuracy and adaptability to varying conditions. Key performance metrics such as accuracy, speed, and scalability across different object types and environmental scenarios are discussed. Additionally, the integration of object detection with navigation systems is examined, highlighting the importance of accurate perception for safe and effective robot movement. This study provides insights into future research directions aimed at advancing the capabilities of indoor robot navigation through enhanced deep learning-based object detection techniques.
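
Since the review covers Faster R-CNN among the detectors, a minimal inference sketch using the pre-trained Faster R-CNN model shipped with torchvision shows the kind of output a navigation stack would consume. The image path and score threshold are placeholder assumptions, and the article does not prescribe this library or model.

import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pre-trained Faster R-CNN (COCO classes) from torchvision; CPU inference for simplicity.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect(image_path, score_threshold=0.5):
    """Return (boxes, labels, scores) above the confidence threshold for one image."""
    img = to_tensor(Image.open(image_path).convert("RGB"))   # CHW float tensor in [0, 1]
    with torch.no_grad():
        output = model([img])[0]                              # list of images in, list of dicts out
    keep = output["scores"] >= score_threshold
    return output["boxes"][keep], output["labels"][keep], output["scores"][keep]

# Usage with a hypothetical file name:
# boxes, labels, scores = detect("indoor_scene.jpg")

For real-time use on a mobile robot, the speed requirements highlighted in the abstract typically push toward a one-stage detector such as YOLO, traded off against the accuracy of two-stage models like Faster R-CNN.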