The phenomenon of big data presents distinct challenges for analysis, especially when the data contain a very large number of variables. High complexity, potential redundancy, and the risk of overfitting are major issues that must be addressed through dimensionality reduction. Principal Component Analysis (PCA) is a common method that is effective for data with linear relationships but has limitations in identifying nonlinear patterns. This research aims to improve classification performance by introducing autoencoders to handle nonlinear relationships, data noise, missing values, outliers, and data with varied measurement scales. The study employs a quantitative approach, analyzing both simulated data and empirical data in the form of the Village Development Index from the Central Statistics Agency, which contains variables measured on different scales. Both dimensionality reduction methods, PCA and neural network-based autoencoders, are tested across various data scenarios. Evaluation is based on how well each method preserves data structure and on the Mean Squared Error (MSE) of the reconstruction. The results indicate that PCA excels in computational efficiency and accuracy for data with linear relationships, whereas the autoencoder performs better at detecting nonlinear patterns, achieving lower MSE values with stable MSE standard deviations. The autoencoder also proves more robust to missing values and outliers than PCA. The choice of dimensionality reduction method therefore depends strongly on the characteristics of the data being analyzed. Autoencoders are a superior alternative for complex, nonlinear data, although they require tuning of model parameters. Further research is recommended to explore how autoencoder network architecture and training strategies affect dimensionality reduction performance.
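The comparison described above, reconstructing data through a low-dimensional bottleneck and scoring with MSE, can be sketched as follows. This is an illustrative sketch, not the study's actual pipeline: the synthetic data, network shape, and all parameter choices here are assumptions, and the autoencoder is approximated with scikit-learn's `MLPRegressor` trained to reproduce its own input rather than a dedicated deep-learning framework.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic nonlinear data: one latent variable t mapped into 5 observed
# dimensions through nonlinear functions, plus a little noise.
t = rng.uniform(-3, 3, size=(500, 1))
X = np.hstack([t, np.sin(t), np.cos(t), t ** 2, np.abs(t)])
X += rng.normal(scale=0.05, size=X.shape)
X = StandardScaler().fit_transform(X)  # put all variables on one scale

# PCA: project onto 1 component, then reconstruct and score with MSE.
pca = PCA(n_components=1).fit(X)
X_pca = pca.inverse_transform(pca.transform(X))
mse_pca = np.mean((X - X_pca) ** 2)

# Autoencoder stand-in: an MLP trained to map X back to X through a
# 1-unit bottleneck (16-1-16 hidden layers are an arbitrary choice).
ae = MLPRegressor(hidden_layer_sizes=(16, 1, 16), activation="tanh",
                  max_iter=5000, random_state=0)
ae.fit(X, X)
mse_ae = np.mean((X - ae.predict(X)) ** 2)

print(f"PCA reconstruction MSE: {mse_pca:.4f}")
print(f"Autoencoder reconstruction MSE: {mse_ae:.4f}")
```

On strongly nonlinear data like this, the linear PCA reconstruction cannot capture the curved latent structure, which is the behavior the study's MSE comparison is designed to expose; actual results will depend on the architecture and training budget.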