Developing reliable real-time navigation systems for visually impaired individuals remains challenging, particularly in dynamic and low-light environments. This study proposes an integrated framework combining YOLOv8, OpenCV-based monocular distance estimation, and RGB–NIR multispectral imaging to enhance detection robustness and distance awareness. A dataset of 1,700 annotated images collected from diverse indoor and outdoor environments was used for training and evaluation, with preprocessing steps including resizing, normalization, and data augmentation. System performance was evaluated using Precision, Recall, F1-Score, mean Average Precision (mAP), Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), and Frames Per Second (FPS). Experimental results show that YOLOv8x achieved the best performance, with an F1-Score of 0.91, mAP@50 of 0.74, MAE of 0.15 m, RMSE of 0.20 m, and a processing speed of 22 FPS. Multispectral RGB–NIR integration further improved low-light performance, increasing the F1-Score from 0.83 to 0.89 and reducing MAE from 0.28 m to 0.19 m, with only a minor reduction in speed. These findings demonstrate that the proposed system provides an effective balance between accuracy and real-time performance for assistive navigation applications.
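The abstract does not specify the exact OpenCV-based estimation method, but monocular distance from a detector's bounding box is commonly computed with the pinhole camera model, and the reported MAE/RMSE follow their standard definitions. The sketch below illustrates both under stated assumptions: `KNOWN_WIDTH_M` and `FOCAL_LENGTH_PX` are illustrative placeholder values, not the paper's calibrated parameters.

```python
KNOWN_WIDTH_M = 0.45     # assumed real-world object width in metres (placeholder)
FOCAL_LENGTH_PX = 800.0  # assumed focal length in pixels (from camera calibration)

def estimate_distance(bbox_width_px: float,
                      known_width_m: float = KNOWN_WIDTH_M,
                      focal_length_px: float = FOCAL_LENGTH_PX) -> float:
    """Pinhole model: distance = (real width * focal length) / pixel width."""
    if bbox_width_px <= 0:
        raise ValueError("bounding-box width must be positive")
    return known_width_m * focal_length_px / bbox_width_px

def mae_rmse(predicted, ground_truth):
    """Mean Absolute Error and Root Mean Squared Error over paired distances."""
    errors = [p - t for p, t in zip(predicted, ground_truth)]
    mae = sum(abs(e) for e in errors) / len(errors)
    rmse = (sum(e * e for e in errors) / len(errors)) ** 0.5
    return mae, rmse

# Example: a 120 px wide detection -> 0.45 * 800 / 120 = 3.0 m
distance_m = estimate_distance(120.0)
```

A calibrated system would derive the focal length from OpenCV's camera calibration and use per-class reference widths; this sketch only captures the geometric relationship behind the distance metrics reported above.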
Copyright © 2026