This study aims to optimize the performance of a Convolutional Neural Network (CNN) model based on the MobileNetV2 architecture for classifying Java sparrow images by testing four main hyperparameters: optimizer, learning rate, number of epochs, and batch size. The dataset consists of 800 images divided evenly into four classes. The results show that the Adam optimizer yields the best accuracy, with a training accuracy of 97.50%, a validation accuracy of 98.75%, and a testing accuracy of 98.13%. A learning rate of 0.001 produces the same accuracies, indicating consistent performance under this configuration. Epoch testing shows that 35 epochs yield the highest performance, with a training accuracy of 98.39%, a validation accuracy of 100%, and a testing accuracy of 98.75%. Batch size testing shows that a batch size of 32 yields the highest testing accuracy (98.85%), a batch size of 64 yields the highest training accuracy (98.63%), and a batch size of 128 yields the highest validation accuracy (99.58%). These findings suggest that smaller batch sizes tend to generalize better, while larger batch sizes provide more stable training but do not always reflect actual performance on the test data. These results underscore the importance of proper hyperparameter settings for improving the accuracy, stability, and generalization of image classification models, and they can serve as a reference for selecting parameter configurations in MobileNetV2-based object recognition.
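To make the reported configuration concrete, the following is a minimal sketch of a MobileNetV2 transfer-learning setup using the best-performing settings from the study (Adam optimizer, learning rate 0.001, batch size 32, 35 epochs, four classes). This is not the authors' exact pipeline; the directory layout, input size, and preprocessing choices are assumptions for illustration.

```python
# Minimal sketch (assumed setup, not the authors' exact pipeline):
# MobileNetV2 backbone fine-tuned for a 4-class Java sparrow classifier.
import tensorflow as tf

IMG_SIZE = (224, 224)   # standard MobileNetV2 input size (assumption)
NUM_CLASSES = 4
BATCH_SIZE = 32         # best testing accuracy in the batch-size experiment
EPOCHS = 35             # best-performing epoch count
LEARNING_RATE = 0.001   # best-performing learning rate

# Hypothetical directory layout: one sub-folder per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "java_sparrow/train", image_size=IMG_SIZE, batch_size=BATCH_SIZE)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "java_sparrow/val", image_size=IMG_SIZE, batch_size=BATCH_SIZE)

# Pre-trained MobileNetV2 backbone (ImageNet weights), frozen, with a new head.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

# Adam optimizer with the learning rate reported as most consistent.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"])

model.fit(train_ds, validation_data=val_ds, epochs=EPOCHS)
```

Varying `BATCH_SIZE`, `EPOCHS`, `LEARNING_RATE`, and the optimizer in this sketch reproduces the kind of parameter sweep the study describes.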