Future planetary landing missions, as demonstrated by Perseverance during its 2021 Mars landing, require advanced guidance, navigation, and control algorithms so that the spacecraft can touch down on a designated target with pinpoint accuracy (circular error precision < 5 m radius) during the powered descent phase. This demands a landing system capable of estimating the craft's state and mapping it to thrust commands for each of the craft's engines. Reinforcement learning is used here as an approach to learn this guidance mapping and translate it into engine thrust control commands. This work compares several reinforcement learning based approaches to the powered landing problem of a spacecraft in a two-dimensional (2-D) environment and identifies their respective advantages and disadvantages. Five reinforcement learning methods are benchmarked in terms of reward and training time required to land the Lunar Lander: Q-Learning, its deep extensions DQN and DDQN, and the policy optimization methods DDPG and PPO. Q-Learning is found to yield the highest efficiency. A further contribution of this paper is the use of different discount rates for terminal and shaping rewards, which significantly enhances optimization performance. We present simulation results demonstrating the guidance and control system's performance in the 2-D simulation environment, including robustness to noise and system parameter uncertainty.
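The dual-discount-rate idea can be illustrated with a minimal sketch. The function below is a hypothetical return computation (not the paper's exact formulation, whose details are given in the body of the work): per-step shaping rewards are discounted with one factor while the terminal landing reward is discounted with a separate, typically larger, factor so that the terminal signal is not attenuated over a long descent trajectory. The function name and parameter values are illustrative assumptions.

```python
def discounted_return(shaping_rewards, terminal_reward,
                      gamma_shape=0.95, gamma_term=0.999):
    """Episode return with separate discount rates.

    shaping_rewards : per-step shaping rewards r_0, ..., r_{T-1}
    terminal_reward : one-time reward at touchdown (step T)
    gamma_shape     : discount applied to shaping rewards
    gamma_term      : discount applied to the terminal reward
    """
    T = len(shaping_rewards)
    # Shaping rewards decay quickly, keeping them local in time.
    shaped = sum(gamma_shape ** t * r for t, r in enumerate(shaping_rewards))
    # The terminal reward keeps most of its value even after many steps.
    terminal = gamma_term ** T * terminal_reward
    return shaped + terminal
```

With a near-unity `gamma_term`, a successful landing reward reaches the start of a long episode almost undiminished, while a smaller `gamma_shape` keeps dense shaping signals from dominating the objective.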