Deep Learning is now widespread across many domains, with Convolutional Neural Networks (CNNs) among its most influential architectures owing to the convolution operation. New methods continue to emerge with steadily increasing accuracy, in some cases approaching near-perfect benchmark performance. However, their deployment is often limited by insufficient computational resources in many environments. Moreover, the growing demand for explainable AI compels researchers to explore approaches that reveal the inner workings of deep learning models rather than treating them as mere black boxes. In this study, a simple CNN is employed as a testbed for examining the features extracted by convolution, which are then projected into a user-friendly two-dimensional representation. The dataset used is the Cats and Dogs dataset from Kaggle, which contains 25,000 labeled images equally distributed between the two classes. The dimensionality reduction methods compared are Principal Component Analysis (PCA), t-distributed Stochastic Neighbor Embedding (t-SNE), and Uniform Manifold Approximation and Projection (UMAP). The results show that UMAP outperforms PCA and t-SNE, achieving the highest silhouette score and a lower Davies–Bouldin index, indicating more compact and better-separated feature clusters.
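The comparison described above can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: it substitutes synthetic two-class Gaussian data for real CNN feature vectors, uses scikit-learn for PCA, t-SNE, and the two cluster-quality metrics, and assumes the third-party `umap-learn` package for UMAP.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.metrics import silhouette_score, davies_bouldin_score

# Hypothetical stand-in for CNN feature vectors: 200 samples, 64-dim,
# drawn from two shifted Gaussians to mimic the two classes (cats vs. dogs).
rng = np.random.default_rng(0)
features = np.vstack([
    rng.normal(0.0, 1.0, size=(100, 64)),
    rng.normal(3.0, 1.0, size=(100, 64)),
])
labels = np.array([0] * 100 + [1] * 100)

reducers = {
    "PCA": PCA(n_components=2),
    "t-SNE": TSNE(n_components=2, random_state=0, perplexity=30),
}
try:  # UMAP lives in the third-party umap-learn package
    from umap import UMAP
    reducers["UMAP"] = UMAP(n_components=2, random_state=0)
except ImportError:
    pass

scores = {}
for name, reducer in reducers.items():
    emb = reducer.fit_transform(features)
    scores[name] = (
        silhouette_score(emb, labels),      # higher = better-separated clusters
        davies_bouldin_score(emb, labels),  # lower = more compact clusters
    )

for name, (sil, dbi) in scores.items():
    print(f"{name}: silhouette={sil:.3f}, Davies-Bouldin={dbi:.3f}")
```

In a real experiment, `features` would instead be the activations of the CNN's penultimate layer on the Cats and Dogs images, and the same two metrics would be computed on each 2-D embedding.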
Copyright © 2025