This research proposes a CNN-based Deep Q-Network (CNN-DQN) model to enhance the navigation capabilities of autonomous vehicles in complex urban environments. The model integrates a convolutional neural network (CNN) for spatial abstraction with value-based reinforcement learning to enable end-to-end decision-making from high-dimensional sensor data. The primary objective is to evaluate the impact of CNN-DQN state abstraction on the quality and stability of the resulting policy. Using a grid-based simulator, the agent is trained on a synthetic dataset representing urban traffic scenarios. The CNN-DQN model consistently outperforms a standard DQN across multiple metrics: cumulative reward increases by 14.3%, loss convergence accelerates by 22%, and mean absolute error (MAE) drops to 0.028. Furthermore, the model achieves a Pearson correlation coefficient of 0.94 for predicted actions and demonstrates superior robustness under Gaussian noise perturbation, with reward degradation limited to 6.18% compared with 18.7% for the baseline. Visualizations of CNN feature maps reveal spatial attention patterns that support efficient path planning, and the action symmetry index confirms that the CNN-DQN agent exhibits consistent left-right decision behavior, validating the regularity of its policy. The novelty of this study lies in its combined use of deep spatial encoding and value-based reinforcement learning for structured, rule-based environments with real-time control implications. These findings indicate that CNN-enhanced reinforcement learning architectures can significantly improve the performance and robustness of autonomous navigation in dynamic urban settings.
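As a concrete illustration of the architecture described above, the following is a minimal PyTorch sketch of a CNN-DQN: a convolutional encoder that abstracts the grid-based observation into a spatial embedding, followed by a fully connected head that outputs one Q-value per discrete driving action. The input resolution (84x84), channel and layer sizes, and the five-action discrete space are illustrative assumptions, not values reported in the paper.

```python
# Minimal CNN-DQN sketch (assumed hyperparameters, not the paper's exact configuration).
import torch
import torch.nn as nn


class CNNDQN(nn.Module):
    def __init__(self, in_channels: int = 3, n_actions: int = 5):
        super().__init__()
        # CNN encoder: spatial abstraction of the grid-based observation.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        # Value head: maps the spatial embedding to one Q-value per action.
        self.q_head = nn.Sequential(
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (batch, channels, height, width) image-like state tensor.
        return self.q_head(self.encoder(obs))


if __name__ == "__main__":
    model = CNNDQN()
    dummy_state = torch.zeros(1, 3, 84, 84)  # one 84x84 grid-world frame
    q_values = model(dummy_state)            # shape: (1, n_actions)
    greedy_action = q_values.argmax(dim=1)   # epsilon-greedy would add exploration
    print(q_values.shape, greedy_action.item())
```

During training, the predicted Q-values would be regressed toward the standard DQN target r + γ max_a' Q_target(s', a') over mini-batches drawn from a replay buffer; the CNN encoder is what distinguishes this agent from a plain fully connected DQN on the same grid observations.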