Reinforcement learning (RL) approaches, particularly Q-learning, have emerged as powerful tools for training autonomous agents, allowing them to acquire optimal decision-making policies through interaction with their environment. This study investigates the use of Q-learning to train autonomous agents for robotic soccer, a complex and dynamic domain that demands strategic planning, coordination, and adaptation. In a series of experiments in a simulated soccer environment, we studied the learning progress and performance of agents trained with Q-learning. During training, agents interacted with the soccer environment, iteratively updating their Q-values in response to the rewards observed for their actions. Despite the high-dimensional and stochastic nature of the soccer domain, Q-learning enabled the agents to develop effective tactics and decision-making capabilities. Notably, our study found that, on average, the agents required 64 steps to reach a stable policy, with an average reward of -1.
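The iterative Q-value update described above can be sketched with standard tabular Q-learning. The toy one-dimensional "advance to goal" environment, the per-step reward of -1, and the hyperparameters below are illustrative assumptions for exposition, not the soccer simulator or settings used in the study:

```python
# Minimal tabular Q-learning sketch (assumed toy environment, not the
# paper's soccer simulator): the agent moves on a 1-D grid toward a goal,
# receiving -1 per step, and updates Q(s, a) from observed rewards.
import random
from collections import defaultdict

N_STATES = 5          # positions 0..4; position 4 is the goal (terminal)
ACTIONS = [-1, +1]    # move left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # assumed learning rate, discount, exploration

def step(state, action):
    """Toy transition: clamp to the grid, -1 reward per step, done at goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, -1.0, nxt == N_STATES - 1

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    Q = defaultdict(float)  # Q[(state, action)] -> estimated return
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q[(s, x)])
            s2, r, done = step(s, a)
            # Q-learning update:
            # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            target = r if done else r + GAMMA * max(Q[(s2, x)] for x in ACTIONS)
            Q[(s, a)] += ALPHA * (target - Q[(s, a)])
            s = s2
    return Q

Q = train()
# Greedy policy extracted from the learned Q-values, one action per
# non-terminal state; +1 means "advance toward the goal".
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

In this sketch the learned greedy policy moves right from every non-terminal state, since each extra step costs -1 and the discount makes shorter paths to the goal strictly preferable.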