Computer vision plays a significant role in mobile robot navigation because of the wealth of information that can be extracted from digital images. Based on the captured images, a mobile robot determines its location and proceeds to the desired destination. However, obstacle avoidance still demands a complex sensor system and high computational efficiency because of the complexity of the environment. This research provides a real-time solution to the problem of extracting corridor scenes from a single image, using an ultra-fast semantic segmentation model to reduce the number of training parameters and the computational cost. The method achieves a mean Intersection over Union (mIoU) of 89% and an accuracy of 95%. To demonstrate the viability of the proposed method, the simulation results are compared with those of contemporary techniques. Finally, the authors use the segmented image to construct the frontal view of the mobile robot in order to determine the free areas available for mobile robot path-planning tasks.
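As a point of reference for the reported metrics, the following is a minimal sketch of how mean Intersection over Union (mIoU) and pixel accuracy are commonly computed for semantic segmentation label maps; the function name, array names, and `num_classes` parameter are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def miou_and_accuracy(pred, target, num_classes):
    """Compute mean IoU and pixel accuracy from two integer label maps.

    pred, target: arrays of class indices with the same shape.
    num_classes: number of segmentation classes (assumed parameter).
    """
    pred = pred.ravel()
    target = target.ravel()

    # Confusion matrix: rows = ground truth, columns = prediction.
    conf = np.bincount(
        target * num_classes + pred, minlength=num_classes ** 2
    ).reshape(num_classes, num_classes)

    intersection = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - intersection
    iou = intersection / np.maximum(union, 1)  # guard against empty classes

    miou = iou.mean()
    accuracy = intersection.sum() / conf.sum()
    return miou, accuracy
```

For instance, evaluating a predicted corridor mask against its ground-truth mask with `miou_and_accuracy(pred_mask, gt_mask, num_classes=2)` would yield the kind of mIoU and accuracy figures quoted above.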