This research presents a comprehensive comparative analysis of four neural network architecture optimization methods: Genetic Algorithm (GA), Random Search, Grid Search, and Adaptive Search. Using the MNIST digits dataset, a systematic evaluation was performed based on accuracy, computational efficiency, and architectural complexity. The experimental results demonstrate that the Genetic Algorithm achieved the highest accuracy at 98.33%, while Grid Search was the most computationally efficient, completing in just 31.06 seconds. Random Search and Adaptive Search showed competitive performance, with accuracies of 97.78% and 97.22% respectively and varying computational requirements. The study revealed that simpler architectures with one or two layers often performed comparably to more complex structures, challenging the common assumption that deeper networks necessarily yield better results. The Genetic Algorithm converged to an optimal single-layer architecture with 119 neurons and ReLU activation, while Adaptive Search explored a more complex three-layer solution. The research identified a non-linear relationship between accuracy gains and computational costs, indicating that substantial additional computation may yield diminishing returns in performance. The convergence patterns of each method provided further insight: GA showed steady improvement across generations, while Random Search discovered good solutions early. These findings contribute to both the theoretical understanding and practical application of neural network optimization, offering insight into the trade-offs between methods and practical guidelines for selecting an architecture optimization strategy under given accuracy and computational constraints.
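As a minimal illustrative sketch (not the authors' code), the single-hidden-layer architecture reported for the GA result (119 neurons, ReLU) can be reproduced with scikit-learn. The dataset loader, train/test split, and training hyperparameters below are assumptions for a self-contained example; the paper's exact experimental setup is not specified here.

```python
# Sketch: evaluate the GA-reported architecture (one hidden layer, 119 neurons, ReLU).
# Dataset and hyperparameters are illustrative assumptions, not the paper's exact setup.
from sklearn.datasets import load_digits               # stand-in for the MNIST digits data
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Single hidden layer with 119 neurons and ReLU activation, as reported for the GA solution.
model = MLPClassifier(hidden_layer_sizes=(119,), activation="relu",
                      max_iter=300, random_state=42)
model.fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.4f}")
```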