The rapid expansion of connected devices has produced an unprecedented growth in data across sectors such as industrial automation, social media, environmental monitoring, and the life sciences. Processing this data is challenging because of its volume, time sensitivity, and security requirements. Computation offloading has emerged as a viable solution, allowing resource-constrained devices to delegate demanding workloads to more capable platforms and thereby improve responsiveness and efficiency. This paper examines decision-making strategies for computation offloading by evaluating several algorithms: a deep neural network with deep reinforcement learning (DNN-DRL), coordinate descent (baseline), AdaBoost, and K-nearest neighbor (KNN). The evaluation focuses on three primary metrics: system accuracy, training duration, and latency. The results indicate that KNN achieves the highest accuracy and lowest latency, AdaBoost offers a strong balance despite higher training cost, and the baseline underperforms in both efficiency and responsiveness. These findings highlight the trade-offs among computational cost, accuracy, and real-time applicability, offering guidance for future IoT and edge-computing systems.