Robot navigation systems rely on Simultaneous Localization and Mapping (SLAM) and obstacle avoidance. Maps are constructed for the robot using computer vision methods that require high repeatability for consistent feature tracking, while obstacle avoidance requires an efficient tool for fusing data from multiple sensors. This research improves SLAM accuracy and obstacle avoidance through advanced visual processing and dynamic neural fields (DNF). We propose two key methods: (1) an enhanced multiscale Harris detector using steerable filters for robust feature extraction, achieving around 90% repeatability; and (2) a dynamic neural field algorithm that predicts the optimal heading angle by integrating visual descriptors and LIDAR data. Experimental results for the first method show that the new feature detector achieves high accuracy, outperforming existing methods. Its invariance to image orientation makes it insensitive to rotations of the robot. Applied to monocular SLAM, it produced precise estimates of the robot's positions. For the second method, the results showed that the dynamic neural field algorithm ensures efficient obstacle avoidance by fusing the gist of the image with LIDAR data, yielding more accurate and consistent navigation than laser-only methods. In conclusion, the study presents significant advancements in robot navigation through robust feature detection for SLAM and effective obstacle avoidance using dynamic neural fields. These advancements enhance the precision and reliability of robot navigation, paving the way for future innovations in autonomous robotic applications.
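To make the feature-detection idea concrete, the following is a minimal single-scale Harris corner response in NumPy. This is a generic textbook Harris detector, not the enhanced multiscale steerable-filter detector proposed above; the box smoothing window, window radius `r`, and sensitivity constant `k` are illustrative assumptions.

```python
import numpy as np

def harris_response(img, k=0.04, r=2):
    """Compute the Harris corner response R = det(M) - k * trace(M)^2,
    where M is the structure tensor smoothed over a (2r+1)x(2r+1) window."""
    # Image gradients via central differences
    Ix = np.gradient(img, axis=1)
    Iy = np.gradient(img, axis=0)

    def smooth(a):
        # Simple box filter (illustrative; a Gaussian is typical in practice)
        out = np.zeros_like(a)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out / (2 * r + 1) ** 2

    # Smoothed structure-tensor entries
    Sxx = smooth(Ix * Ix)
    Syy = smooth(Iy * Iy)
    Sxy = smooth(Ix * Iy)

    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2
```

On a synthetic image containing a bright square, the response is positive at the square's corners and negative along its edges, which is the property a repeatable corner detector exploits.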
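The heading-selection step can be illustrated with a standard Amari-type dynamic neural field defined over discretized heading angles. The interaction kernel, Heaviside firing function, and the simple additive fusion of a target signal and an obstacle signal below are illustrative assumptions, not the exact formulation used in the study.

```python
import numpy as np

def dnf_heading(target_input, obstacle_input,
                steps=200, dt=0.1, tau=1.0, h=-2.0):
    """Relax a 1-D dynamic neural field over heading bins and return the
    index of the peak activation (the selected heading angle)."""
    n = len(target_input)
    u = np.zeros(n)                     # field activation
    x = np.arange(n)

    # Mexican-hat interaction: local excitation, global inhibition,
    # with circular distance since headings wrap around
    d = np.abs(x[:, None] - x[None, :])
    d = np.minimum(d, n - d)
    w = 3.0 * np.exp(-d ** 2 / (2 * 4.0 ** 2)) - 1.0

    # Fused external input: attraction to the target, repulsion from obstacles
    s = target_input - obstacle_input

    for _ in range(steps):
        f = (u > 0).astype(float)       # Heaviside firing rate
        u += dt / tau * (-u + h + w @ f / n + s)

    return int(np.argmax(u))
```

With a target bump at one heading and an obstacle bump elsewhere, the field forms a single self-stabilized peak near the unobstructed target direction, which is the mechanism behind the fusion-based avoidance described above.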