This study evaluates the EfficientNetB1 architecture for handwritten digit recognition on the MNIST dataset, which consists of 60,000 training images and 10,000 test images in 28×28 grayscale format. The methodology includes preprocessing steps such as image resizing, grayscale-to-RGB conversion, pixel normalization, and data augmentation. EfficientNetB1 serves as a feature extractor, followed by dense layers and a softmax output layer for classification. The model is trained with three optimizers (Adam, SGD, and RMSprop) at learning rates of 0.001, 0.01, and 0.1. Experimental results indicate that RMSprop with a learning rate of 0.001 yields the highest validation accuracy, 97.9%. Classification errors occur mostly on digits with similar visual structures, such as 2 and 5. This research offers practical insights into the effective use of EfficientNetB1 and hyperparameter tuning for handwritten digit classification tasks.
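The pipeline described above can be sketched in Keras as follows. This is a minimal illustration, not the authors' exact setup: the input resolution (240×240, the EfficientNetB1 default), the single 128-unit dense layer, and the use of randomly initialized weights (`weights=None`, to keep the sketch self-contained) are assumptions not stated in the abstract.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras


def preprocess(images, size=240):
    """MNIST batch (N, 28, 28) uint8 -> (N, size, size, 3) float32 in [0, 1]."""
    x = images.astype("float32")[..., np.newaxis] / 255.0  # pixel normalization
    x = tf.image.resize(x, (size, size))                   # image resizing
    return tf.image.grayscale_to_rgb(x)                    # grayscale-to-RGB


def build_model(input_shape=(240, 240, 3), num_classes=10):
    # EfficientNetB1 as a feature extractor; in practice weights="imagenet"
    # would likely be used, but that triggers a download, so we omit it here.
    base = keras.applications.EfficientNetB1(
        include_top=False, weights=None,
        input_shape=input_shape, pooling="avg")
    inputs = keras.Input(shape=input_shape)
    x = base(inputs)
    x = keras.layers.Dense(128, activation="relu")(x)  # assumed dense layer size
    outputs = keras.layers.Dense(num_classes, activation="softmax")(x)
    return keras.Model(inputs, outputs)


model = build_model()
# Best-performing configuration reported in the abstract: RMSprop, lr = 0.001
model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```

Training would then call `model.fit` on the preprocessed (and optionally augmented) MNIST training split; swapping the optimizer object or `learning_rate` reproduces the other hyperparameter combinations compared in the study.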
Copyright © 2025