Artificial Neural Networks (ANNs) have demonstrated applicability and effectiveness in several domains, including classification tasks. Considerable attention has been devoted to ANN training techniques, which aim to identify appropriate weights and biases. Conventional training techniques such as Gradient Descent (GD) and Backpropagation (BP), while widely used, suffer from several disadvantages, including premature convergence, strong dependence on initial parameters, and a tendency to become trapped in local optima. Meta-heuristic algorithms, by contrast, show great potential as effective approaches for training ANNs, offering computational efficiency, high-quality solutions, and global search capabilities. Because the literature has proposed many such techniques, this paper offers a thorough examination of recent advances in training a Multilayer Perceptron (MLP) neural network with meta-heuristic algorithms, focusing on classification benchmark datasets. The review covers the ten-year period from 2014 to 2024. Research papers were selected from four widely used databases: ScienceDirect, Scopus, Springer, and IEEE Xplore. Using a research methodology with explicit inclusion and exclusion criteria, and through a thorough examination of more than 53 publications, we present a comprehensive study of meta-heuristic methods for training MLPs, with a primary focus on identifying trends across these techniques. The analysis considers relevant factors such as evaluation metrics for classification models, fitness functions, comparison approaches, datasets, and reported outcomes. This work therefore serves as a valuable resource for researchers, facilitating the identification of suitable optimization methodologies for various application areas.
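To make the core idea concrete, the sketch below shows how a meta-heuristic can train an MLP: candidate solutions encode the network's weights and biases as flat vectors, and the fitness function is the classification error to be minimized. This is a minimal illustration only, not a method from any surveyed paper; the toy XOR dataset, the 2-4-1 network shape, and the simple mutation-based search (a (1+λ)-style hill climber standing in for richer meta-heuristics such as PSO or GA) are all assumptions chosen for brevity.

```python
import math
import random

random.seed(0)

# Toy dataset: XOR classification (illustrative assumption, not from the survey).
DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

N_IN, N_HID = 2, 4
# Hidden layer: N_HID * (N_IN + 1) weights+biases; output layer: N_HID + 1.
N_WEIGHTS = N_HID * (N_IN + 1) + (N_HID + 1)

def mlp_forward(weights, x):
    """Forward pass of a 2-4-1 MLP; `weights` is one flat candidate vector."""
    idx = 0
    hidden = []
    for _ in range(N_HID):
        s = weights[idx + N_IN]  # hidden-unit bias
        for i in range(N_IN):
            s += weights[idx + i] * x[i]
        idx += N_IN + 1
        hidden.append(math.tanh(s))
    s = weights[idx + N_HID]  # output bias
    for j in range(N_HID):
        s += weights[idx + j] * hidden[j]
    return 1 / (1 + math.exp(-s))  # sigmoid output in (0, 1)

def fitness(weights):
    """Fitness = classification error rate on the dataset (minimized)."""
    errors = sum((mlp_forward(weights, x) >= 0.5) != (y == 1) for x, y in DATA)
    return errors / len(DATA)

def train(pop_size=30, generations=200, sigma=0.5):
    """Mutation-based search: perturb the best vector, keep improvements."""
    best = [random.uniform(-1, 1) for _ in range(N_WEIGHTS)]
    best_fit = fitness(best)
    for _ in range(generations):
        for _ in range(pop_size):
            cand = [w + random.gauss(0, sigma) for w in best]
            f = fitness(cand)
            if f <= best_fit:
                best, best_fit = cand, f
        if best_fit == 0.0:
            break
    return best, best_fit

if __name__ == "__main__":
    weights, err = train()
    print("final error rate:", err)
```

Swapping in a population-based meta-heuristic (PSO, GA, GWO, and so on) changes only the `train` routine; the weight encoding and the error-based fitness function, which are the elements the surveyed papers vary, stay the same.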