This study examines the optimization of a Support Vector Machine (SVM) classifier using Principal Component Analysis (PCA), a dimensionality reduction method, for classifying high-dimensional images. The dataset is the Chessman image set, with 12,288 features per image. PCA was applied to retain 99% of the total variance, yielding 312 principal components. The results show a significant improvement in computational efficiency: training time dropped from 29.85 seconds to 0.17 seconds (about 168 times faster), and memory usage fell from 25.83 MB to 0.66 MB (a 97% reduction). Although accuracy decreased slightly, from 31.58% to 31.22%, PCA also acts as a noise filter that can improve per-class performance, particularly for classes with complex visual patterns: the F1-score of the "Rook" class rose from 0.32 to 0.37. The study concludes that PCA delivers substantial efficiency gains without significantly sacrificing classification performance.
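The PCA-then-SVM pipeline described above can be sketched as follows. This is a minimal illustration, not the study's actual code: the synthetic data, train/test split, and default SVM hyperparameters are assumptions, standing in for the 12,288-feature Chessman images.

```python
# Sketch of a PCA -> SVM pipeline with a 99% explained-variance target.
# Synthetic data stands in for flattened image vectors (an assumption).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 256))   # 200 samples, 256 features each
y = rng.integers(0, 6, size=200)  # six stand-in chess-piece classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A float in (0, 1) tells PCA to keep just enough components to
# explain that fraction of the total variance (0.99 here).
pca = PCA(n_components=0.99)
X_tr_p = pca.fit_transform(X_tr)
X_te_p = pca.transform(X_te)  # reuse the projection fitted on train data

clf = SVC().fit(X_tr_p, y_tr)
acc = clf.score(X_te_p, y_te)
print(pca.n_components_, acc)
```

Fitting PCA on the training split only, then reusing the same projection on the test split, keeps the evaluation free of information leakage; the reduced feature count is what drives the training-time and memory savings reported above.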
Copyright © 2026