Indoor reconnaissance missions for unmanned aerial vehicles (UAVs) pose significant challenges in scene reconstruction, mapping, and environmental feature extraction. Relying on a single type of sensor often results in limited accuracy, increased susceptibility to environmental noise, and a lack of comprehensive spatial information. To address these issues, this study proposes a mapping method that combines light detection and ranging (LiDAR) and depth camera data. The method collects data from a LiDAR and a depth camera mounted on the UAV, preprocesses both data streams, and constructs local 3D maps using the real-time appearance-based mapping (RTAB-Map) algorithm. The local maps are then merged with a filtering step to generate a detailed and complete global map. Real-time experiments conducted on Ubuntu 20.04 with the Robot Operating System (ROS) Noetic libraries demonstrate that this multi-sensor fusion approach provides richer and more comprehensive environmental information, thereby enhancing the effectiveness of mapping in unknown indoor environments.
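To make the fusion pipeline concrete, the following is a minimal illustrative sketch (not the authors' implementation) of a ROS Noetic node that time-synchronizes LiDAR and depth-camera point clouds, merges them, and applies a simple voxel-grid filter before republishing a single cloud. The topic names, node name, voxel size, and the assumption that both clouds are already expressed in a common frame are placeholders introduced here for illustration.

```python
#!/usr/bin/env python3
# Illustrative sketch: fuse LiDAR and depth-camera PointCloud2 messages,
# assuming both are published in a common frame (topic names are placeholders).
import numpy as np
import rospy
import message_filters
from sensor_msgs.msg import PointCloud2
from sensor_msgs import point_cloud2 as pc2

VOXEL_SIZE = 0.05  # assumed grid resolution in metres for the downsampling filter


def cloud_to_xyz(msg):
    """Extract an N x 3 array of finite XYZ points from a PointCloud2 message."""
    pts = np.array(list(pc2.read_points(msg, field_names=("x", "y", "z"),
                                        skip_nans=True)), dtype=np.float32)
    return pts.reshape(-1, 3)


def voxel_downsample(points, voxel_size):
    """Keep one representative point per occupied voxel (simple grid filter)."""
    if points.shape[0] == 0:
        return points
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)
    _, keep = np.unique(voxel_idx, axis=0, return_index=True)
    return points[keep]


class CloudFusionNode:
    def __init__(self):
        self.pub = rospy.Publisher("/fused_points", PointCloud2, queue_size=1)
        lidar_sub = message_filters.Subscriber("/lidar/points", PointCloud2)
        depth_sub = message_filters.Subscriber("/camera/depth/points", PointCloud2)
        # Pair LiDAR and depth-camera clouds whose stamps differ by at most 50 ms.
        sync = message_filters.ApproximateTimeSynchronizer(
            [lidar_sub, depth_sub], queue_size=10, slop=0.05)
        sync.registerCallback(self.fuse)

    def fuse(self, lidar_msg, depth_msg):
        # Concatenate both clouds, thin the result with the voxel filter,
        # and publish a single PointCloud2 for downstream mapping (e.g. RTAB-Map).
        merged = np.vstack([cloud_to_xyz(lidar_msg), cloud_to_xyz(depth_msg)])
        merged = voxel_downsample(merged, VOXEL_SIZE)
        self.pub.publish(pc2.create_cloud_xyz32(lidar_msg.header, merged))


if __name__ == "__main__":
    rospy.init_node("cloud_fusion_node")
    CloudFusionNode()
    rospy.spin()
```

In such a setup, the fused topic could be remapped to the mapping node's point-cloud input; the actual preprocessing and map-merging filters used in the study may differ from this simplified voxel-grid example.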