Dynamic programming (DP) is a foundational technique for solving multistage optimization problems; however, its classical formulation often incurs cubic time complexity, rendering it impractical for large-scale real-world applications. This study proposes an accelerated dynamic programming framework that leverages the quadrangle inequality and convex monotonicity theory to significantly reduce computational overhead. By formally establishing the structural conditions under which DP decision boundaries exhibit monotone behavior, the proposed method eliminates redundant state transitions and maintains a sequence of admissible decisions through an efficient stack-based mechanism. The resulting algorithm achieves a worst-case complexity of O(n²) and can be further optimized to O(n) when convex monotonicity holds, a substantial improvement over the traditional O(n³) formulation. Experimental validation on the classical stone-merging problem shows that the accelerated method runs more than 60 times faster than the standard approach on large inputs while producing identical optimal results. These findings confirm that embedding such geometric regularities into DP formulations not only enhances performance but also broadens the applicability of dynamic programming to computation-intensive optimization tasks. This research contributes a mathematically grounded, scalability-oriented solution for accelerating dynamic programming, with implications for logistics, production planning, data segmentation, and other large-scale decision systems.