The development of simultaneous localization and mapping (SLAM) technology is crucial for advancing autonomous systems in robotics and navigation. However, camera-based SLAM systems face significant challenges in accuracy, robustness, and computational efficiency, particularly under conditions of environmental variability, dynamic scenes, and hardware limitations. This paper provides a comprehensive review of camera-based SLAM methodologies, focusing on their approaches to pose estimation, map reconstruction, and camera type. The application of deep learning is also discussed, with attention to how it is expected to improve performance. The objective of this paper is to advance the understanding of camera-based SLAM systems and to provide a foundation for future innovations in robust, efficient, and adaptable SLAM solutions. Additionally, it offers pertinent references and insights for the design and implementation of next-generation SLAM systems across various applications.
Copyright © 2025