This study discusses a computational parallelization model that integrates the divide-and-conquer, pipeline processing, and map-reduce approaches as strategies for improving data processing efficiency and the performance of modern computing systems. The study is motivated by the growing need to optimize increasingly complex computational processes, particularly in large-scale data processing scenarios and in applications that demand fast execution times. The literature review covers the fundamental concepts, functions, and principal roles of each parallel algorithm in creating more efficient processing structures. The research method adopts a qualitative approach with descriptive analysis, focusing on the interpretation of the literature, the mapping of operational mechanisms, and a comparative evaluation of the three parallelization models in the context of practical implementation. The results indicate that combining the three models can enhance scalability, reduce bottlenecks, and significantly accelerate computation. The study concludes that the appropriate selection and integration of parallel strategies can effectively meet the demands of modern computing in both distributed environments and multi-core architectures.
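As an illustrative sketch only (not taken from the study itself), two of the strategies the abstract names can be combined in a few lines of Python: the input is divided into chunks (divide-and-conquer), each chunk is mapped to a partial result concurrently, and the partials are reduced into a final answer (map-reduce). The `word_count` function, the chunking scheme, and the use of a thread pool are assumptions chosen for a minimal, self-contained example.

```python
from concurrent.futures import ThreadPoolExecutor
from collections import Counter
from functools import reduce

def map_chunk(chunk):
    # Map step: count words in one chunk, independently of other chunks
    return Counter(word for line in chunk for word in line.split())

def merge_counts(a, b):
    # Reduce step: combine two partial counts into one
    a.update(b)
    return a

def word_count(lines, workers=4):
    # Divide: split the input into roughly equal chunks, one per worker
    size = max(1, len(lines) // workers)
    chunks = [lines[i:i + size] for i in range(0, len(lines), size)]
    # Conquer / map: process the chunks concurrently
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(map_chunk, chunks))
    # Combine / reduce: merge the partial results into the final count
    return reduce(merge_counts, partials, Counter())
```

For example, `word_count(["a b a", "b c"])` yields a count of 2 for both `a` and `b` and 1 for `c`. A pipeline variant of the same idea would instead pass each chunk through a sequence of stages (e.g. tokenize, filter, count) so that different stages work on different chunks at the same time.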
Copyright © 2026