Managing large agent swarms in dynamic, real-time settings remains a significant challenge, particularly in game artificial intelligence such as real-time strategy titles. Traditional Particle Swarm Optimization (PSO) techniques, while effective for optimization tasks, often exhibit slow convergence and limited adaptability in complex, demanding scenarios. This study introduces a hybrid approach that integrates Reinforcement Learning (RL) with PSO to create an adaptive swarm clustering system. Rather than relying on static mathematical benchmarks as earlier studies do, the approach employs a Deep Deterministic Policy Gradient (DDPG) agent to dynamically adjust PSO parameters, enabling the swarm to maneuver and cluster within a procedurally generated 2D simulation environment containing physical obstacles. A quantitative analysis using Mixed Linear Model Regression (MLMR) shows that the hybrid method statistically outperforms conventional, manually tuned PSO in both convergence time and diversity value. For example, the RLGPSO model achieved an 11.46% reduction in convergence time on high-complexity maps, a result confirmed as statistically significant (p = 0.002) by the MLMR analysis. The study thus offers a pragmatic path toward intelligent, self-organizing agent swarms, directly applicable to improving the realism and effectiveness of modern game AI.
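To illustrate the coupling the abstract describes, the sketch below shows a standard PSO update whose coefficients (inertia w, cognitive c1, social c2) are supplied each step by an external controller, as the DDPG agent would do. Everything here is an assumption for illustration: the paper's exact state/action spaces, reward, and environment are not given, so a simple decaying-inertia schedule stands in for the learned policy, and a sphere objective stands in for the clustering task.

```python
import numpy as np

def pso_step(pos, vel, pbest, gbest, params, rng):
    """One velocity/position update with externally supplied coefficients.

    params = (w, c1, c2): inertia, cognitive, and social weights. In the
    paper's hybrid, a DDPG agent would emit these each step; here they
    are plain inputs (an assumption for this sketch).
    """
    w, c1, c2 = params
    r1 = rng.random(pos.shape)  # per-dimension random factors, as in canonical PSO
    r2 = rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel

def run_pso(objective, n_agents=30, dims=2, steps=100, controller=None, seed=0):
    """Minimize `objective` over [-5, 5]^dims.

    `controller` maps the step index to (w, c1, c2); a learned policy would
    instead condition on the swarm state (hypothetical interface -- the
    paper does not specify it).
    """
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, (n_agents, dims))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.apply_along_axis(objective, 1, pos)
    gbest = pbest[pbest_val.argmin()].copy()
    for t in range(steps):
        # Static coefficients play the role of the manually tuned PSO baseline.
        params = controller(t) if controller else (0.7, 1.5, 1.5)
        pos, vel = pso_step(pos, vel, pbest, gbest, params, rng)
        vals = np.apply_along_axis(objective, 1, pos)
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())

# A decaying-inertia schedule standing in for the DDPG policy's output.
sphere = lambda x: float(np.sum(x ** 2))
adaptive = lambda t: (0.9 - 0.5 * t / 100, 1.5, 1.5)
best, best_val = run_pso(sphere, controller=adaptive)
```

Swapping `controller` for a trained actor network is the only change the hybrid scheme requires at this level; the PSO update itself is untouched, which is what makes the approach a drop-in replacement for hand-tuned coefficients.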
Copyright © 2026