This research explores the optimization of Convolutional Neural Networks (CNNs) for image classification through a numerical experiment. A simplified CNN is trained on a small dataset of 100 randomly generated 28×28 images. The model incorporates key components: convolutional layers, batch normalization, max-pooling, and dense layers. Training runs for 10 epochs with the Adam optimizer and sparse categorical cross-entropy loss. The model reaches a training accuracy of 85%, but the validation accuracy, the more telling measure of generalization, lags at 60%, a gap indicative of overfitting. The discussion attributes this to the small, synthetic dataset and underscores the importance of real-world, diverse data for meaningful experimentation. The example serves as a foundation for understanding CNN training dynamics, with implications for refining models in more realistic image classification settings. The conclusion calls for future research to focus on advanced techniques, larger datasets, and comprehensive validation to enhance the reliability and practical applicability of CNN models.
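
To make the described experiment concrete, the following is a minimal sketch of the setup, assuming a TensorFlow/Keras implementation. The framework choice, the number of classes (10), and the specific layer sizes are illustrative assumptions not stated in the abstract; only the named components (convolution, batch normalization, max-pooling, dense layers), the 100-sample 28×28 synthetic dataset, the 10-epoch budget, and the Adam/sparse-categorical-cross-entropy configuration come from the text.

```python
import numpy as np
import tensorflow as tf

# Synthetic dataset: 100 random 28x28 grayscale images with random labels.
# The class count of 10 is an assumption; the abstract does not specify it.
num_classes = 10
x = np.random.rand(100, 28, 28, 1).astype("float32")
y = np.random.randint(0, num_classes, size=100)

# Simplified CNN with the components named in the abstract: convolution,
# batch normalization, max-pooling, and dense layers. Filter counts and
# dense widths are illustrative.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])

# Adam optimizer and sparse categorical cross-entropy loss, as described.
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# 10 epochs; a held-out validation split stands in for the validation set
# implied by the reported validation accuracy.
history = model.fit(x, y, epochs=10, validation_split=0.2)
```

With labels assigned at random, any gap between training and validation accuracy on such a run reflects memorization rather than learned structure, which is precisely the limitation the discussion raises about small, synthetic datasets.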