Samad Dadvandipour
University of Miskolc

Published: 2 Documents
Articles


Effect of filter sizes on image classification in CNN: a case study on CIFAR10 and Fashion-MNIST datasets Owais Mujtaba Khanday; Samad Dadvandipour; Mohd Aaqib Lone
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 10, No 4: December 2021
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v10.i4.pp872-878

Abstract

Convolutional neural networks (CNNs, or ConvNets), a class of deep neural networks inspired by biological processes, are widely used for image classification and other visual-imagery tasks. These networks require various parameters or attributes, such as the number of filters, the filter size, the number of input channels, padding, stride, and dilation. In this paper, we focus on one hyperparameter: filter size. Filters come in various sizes, such as 3×3, 5×5, and 7×7. We varied the filter sizes and recorded their effect on the models' accuracy. The models' architecture was kept intact and only the filter sizes were varied, giving a clearer picture of the effect of filter size on image classification. The CIFAR10 and Fashion-MNIST datasets were used for this study. Experimental results showed that accuracy is inversely proportional to filter size. The accuracy using 3×3 filters on CIFAR10 and Fashion-MNIST was 73.04% and 93.68%, respectively.
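One mechanical reason filter size matters is that, with "valid" convolution, a larger kernel shrinks the feature map more, so less spatial detail survives each layer. The sketch below is not the paper's code; it is a minimal, assumed illustration in NumPy (naive convolution, no padding, stride 1) showing how the output size of a 32×32 input, the spatial size of a CIFAR10 channel, changes with the kernel sizes the paper compares:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D convolution: no padding, stride 1.
    Output is (H - k + 1) x (W - k + 1) for a k x k kernel."""
    H, W = image.shape
    k = kernel.shape[0]
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Element-wise product of the k x k window with the kernel
            out[i, j] = np.sum(image[i:i + k, j:j + k] * kernel)
    return out

img = np.random.rand(32, 32)  # one CIFAR10-sized channel
for k in (3, 5, 7):
    fmap = conv2d_valid(img, np.ones((k, k)) / (k * k))  # box filter
    print(f"{k}x{k} filter -> feature map {fmap.shape}")
```

A 3×3 filter yields a 30×30 map, while a 7×7 filter yields only 26×26, so deeper stacks of large filters discard spatial information faster, one plausible factor behind the accuracy trend reported above.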
Analysis of machine learning algorithms for character recognition: a case study on handwritten digit recognition Owais Mujtaba Khanday; Samad Dadvandipour
Indonesian Journal of Electrical Engineering and Computer Science Vol 21, No 1: January 2021
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v21.i1.pp574-581

Abstract

This paper reviews work on handwritten digit recognition and the various classifiers that have been developed for it. Methods such as multilayer perceptrons (MLP), support vector machines (SVM), Bayesian networks, and random forests are discussed and empirically evaluated for accuracy. Among these methods, Boosted LeNet-4, an ensemble of classifiers, showed the highest accuracy.
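As a small, assumed illustration of the kind of baseline the paper evaluates (not the authors' code or datasets), an SVM can be trained on scikit-learn's bundled 8×8 digit images, a miniature stand-in for MNIST-style benchmarks:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# 1,797 grayscale 8x8 handwritten digits, labels 0-9
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# RBF-kernel SVM; gamma=0.001 is a common choice for this dataset
clf = SVC(kernel="rbf", gamma=0.001).fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```

Even this simple baseline scores well above 90% on the held-out split, which is why comparisons like the paper's focus on the remaining percentage points separating SVMs, random forests, and ensembles such as Boosted LeNet-4.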