Optimization problems in computer science often arise when a system must select the best combination from several alternatives under limited resources such as capacity, time, or cost. A commonly used optimization model is the Knapsack Problem, which involves selecting a subset of items, each with a specific weight and value, so as to maximize total value without exceeding the available capacity. This study analyzes and compares the performance of the Greedy algorithm and Dynamic Programming in solving the 0–1 Knapsack Problem. The research employs a quantitative experimental approach, implementing both algorithms in a computer program and testing them on several datasets of different sizes. The evaluation parameters are the maximum value obtained and the execution time of each algorithm. The results show that the Greedy algorithm offers faster execution and more efficient memory usage but does not always produce an optimal solution, whereas Dynamic Programming consistently produces the optimal solution at the cost of greater computation time. The choice of algorithm should therefore be matched to system requirements, depending on whether computational efficiency or solution optimality is the priority.
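The contrast described above can be sketched in a few lines of Python. The function names and the three-item sample instance below are illustrative only (they are not taken from the study's datasets); the greedy variant uses the standard value-to-weight ratio heuristic, and the dynamic programming variant uses the standard O(n × capacity) table.

```python
def knapsack_greedy(items, capacity):
    # Heuristic: sort by value/weight ratio (descending) and take each
    # whole item that still fits. Fast, but not always optimal for 0-1 knapsack.
    total = 0
    for w, v in sorted(items, key=lambda iv: iv[1] / iv[0], reverse=True):
        if w <= capacity:
            capacity -= w
            total += v
    return total

def knapsack_dp(items, capacity):
    # dp[c] = best achievable value with remaining capacity c.
    # Iterating capacities downward ensures each item is used at most once.
    dp = [0] * (capacity + 1)
    for w, v in items:
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

items = [(10, 60), (20, 100), (30, 120)]  # (weight, value) pairs
print(knapsack_greedy(items, 50))  # 160 -- greedy misses the optimum here
print(knapsack_dp(items, 50))      # 220 -- DP finds the optimal subset
```

On this instance the greedy heuristic fills capacity with the two highest-ratio items (weights 10 and 20) and leaves no room for the third, yielding 160, while dynamic programming selects the weight-20 and weight-30 items for the optimal 220, illustrating the trade-off reported in the study.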
Copyright © 2026