Internet of Things (IoT) deployments span heterogeneous infrastructure such as edge devices, fog nodes, and cloud servers, each with distinct computational capacity, energy constraints, and cost profiles. Scheduling across this three-tier stack must satisfy four competing demands: latency bounds, energy budgets, workload distribution, and cloud offloading cost. None of these can be optimized in isolation, and workload variability across deployment sites makes the problem harder still. In this paper, we review task scheduling strategies in edge-fog-cloud environments, comparing heuristic, metaheuristic, and machine learning-based approaches across deployment settings, adaptation capacity, and measured performance. Our findings reveal that metaheuristic methods reduce makespan and energy consumption, while learning-based approaches improve latency and task success rates, though under narrower conditions. Yet widespread reliance on simulation-based evaluation and task-independence assumptions limits what these results actually demonstrate. Fixed objective weighting, unvalidated scalability, missing support for workflow dependencies, and static priority schemes each constrain practical deployment. Future research should therefore prioritize shared or validated testbeds, workflow-aware scheduling formulations with dependency support, variable objective priorities, and scalability studies beyond small-to-medium topologies. Our study establishes a basis for designing scheduling strategies that hold under real deployment conditions across IoT, fog, and cloud applications in production settings.
Copyright © 2026