The rapid growth of big data has sharply increased the computational cost of machine learning, largely due to intensive linear algebra operations that limit scalability and efficiency. This study investigates the effectiveness of Randomized Linear Algebra (RLA) as an acceleration strategy for machine learning in large-scale data environments. The research adopts an experimental methodology: randomized techniques such as matrix sketching and random projection are integrated into standard machine learning pipelines and evaluated against deterministic baseline approaches. Experiments are conducted on high-dimensional datasets across multiple machine learning models, with performance assessed in terms of computation time, memory usage, model accuracy, and scalability. The results demonstrate that the proposed RLA-based approach substantially reduces computational cost and memory consumption while maintaining predictive accuracy comparable to conventional methods. These findings indicate that randomized techniques offer an effective trade-off between efficiency and accuracy, enabling scalable machine learning for big data applications. In conclusion, this study contributes to the advancement of efficient Artificial Intelligence (AI) systems by demonstrating that RLA can serve as a practical and scalable solution for accelerating machine learning computations in big data contexts, in line with the growing demand for resource-efficient, high-performance AI infrastructures.
Copyright © 2026