The increasing complexity of machine learning algorithms is often accompanied by higher computational costs, particularly when dealing with high-dimensional data. This poses significant challenges for computational efficiency and resource utilization. One mathematical approach that can address this issue is the application of linear algebra, specifically matrix decomposition techniques. This study applies matrix decomposition methods to reduce the computational complexity of machine learning algorithms without significantly degrading model performance. The proposed approach employs matrix decompositions such as Singular Value Decomposition (SVD) during the data preprocessing and model training stages. Algorithm performance is evaluated by comparing computational time, accuracy, and memory efficiency before and after the application of matrix decomposition. The experimental results demonstrate that matrix decomposition can significantly reduce computational complexity and improve learning efficiency while maintaining stable or only slightly reduced accuracy. These findings indicate that matrix decomposition is an effective and practical approach for optimizing machine learning algorithms, particularly for large-scale, high-dimensional datasets.
Copyright © 2026