This study analyzes the impact of five activation functions (ReLU, LeakyReLU, ELU, Sigmoid, and Tanh) on the performance of a Convolutional Neural Network (CNN) model for image classification into three categories: cats, dogs, and wild animals. The evaluation was conducted using validation accuracy, accuracy trends across training epochs, and confusion matrix analysis. The results show that modern activation functions such as LeakyReLU, ELU, and ReLU yield high accuracy and balanced predictions, demonstrating their effectiveness in mitigating vanishing gradient issues and enhancing the model's generalization capability. In contrast, classical functions such as Sigmoid and Tanh performed poorly, producing imbalanced predictions and stagnant accuracy. Therefore, the choice of activation function plays a critical role in building an optimal CNN model for image classification tasks. This study recommends ReLU-based activation functions, particularly LeakyReLU, as the primary choice for developing multi-class image classification models.
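The contrast the abstract draws between modern and classical activation functions can be illustrated numerically. The sketch below (an illustrative assumption, not the study's own code) implements the five functions with NumPy; evaluating them at a strongly negative input shows why Sigmoid and Tanh saturate (outputs pinned near 0 or -1, so gradients vanish) while LeakyReLU and ELU retain a nonzero response.

```python
import numpy as np

def relu(x):
    # Zero for negative inputs, identity for positive inputs.
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Small negative slope (alpha) keeps a gradient for x < 0.
    return np.where(x > 0, x, alpha * x)

def elu(x, alpha=1.0):
    # Smooth exponential curve for x < 0, bounded below by -alpha.
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def sigmoid(x):
    # Saturates toward 0 or 1 for large |x| (vanishing gradient).
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Saturates toward -1 or 1 for large |x|.
    return np.tanh(x)

# A strongly negative pre-activation: Sigmoid/Tanh are effectively flat here,
# while LeakyReLU and ELU still propagate a usable signal.
x = np.array([-10.0])
for name, fn in [("ReLU", relu), ("LeakyReLU", leaky_relu),
                 ("ELU", elu), ("Sigmoid", sigmoid), ("Tanh", tanh)]:
    print(f"{name}: {fn(x)[0]:.5f}")
```

Evaluating these at x = -10 makes the saturation concrete: sigmoid(-10) is roughly 4.5e-5 and tanh(-10) is nearly -1, so their derivatives there are close to zero, whereas leaky_relu(-10) = -0.1 still carries gradient information back through the network.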
Copyright © 2025