Creating high-quality 2D and 3D assets is essential for digital content production, but inefficient scheduling and inaccurate time estimates often hamper the rendering process. Traditional methods, which assume rendering time is directly proportional to frame count, fail to account for variations in scene complexity, producing severe estimation errors that average 97.0% across all tasks. We propose a Hybrid Round-Robin Scheduler (HRRS) that manages batch rendering tasks through complexity-aware classification. Our method first categorizes tasks by complexity (Low, Medium, High) and routes them to dedicated queues with tiered quantum allocations; it then applies non-linear time estimation models and dynamically adjusts processing priorities based on real-time performance metrics. We evaluated the scheduler against three standard algorithms—First-Come-First-Served (FCFS), Shortest Job First (SJF), and Round Robin (RR)—on 21 diverse rendering tasks with frame counts ranging from 10 to 420. The results show that our approach reduces average waiting time by 45.9% (from 29.63s to 16.02s) and cuts bottleneck-induced delays by 78% (from 41s to 9s), while maintaining optimal CPU utilization at 85% and limiting context switches to nine occurrences. A key finding is that complexity, not frame count, is the primary driver of processing time: high-complexity tasks required significantly longer processing (averaging 238.27 seconds) than medium-complexity tasks (averaging 34.52 seconds), a 6.9-fold differential. Our hybrid framework thus overcomes the primary limitations of the baseline algorithms: the large-task bottlenecks of FCFS, the parallelism issues of SJF, and the overhead of frequent context switching in RR.
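The core mechanism described above—routing tasks into per-complexity queues, each served round-robin with its own quantum—can be illustrated with a minimal sketch. The `Task` fields, queue names, and quantum values below are illustrative assumptions, not the paper's actual HRRS parameters or estimation models.

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class Task:
    name: str
    complexity: str   # "low", "medium", or "high"
    remaining: float  # estimated processing time still needed (seconds)


# Tiered quanta (hypothetical values): heavier tasks get longer uninterrupted
# slices, which limits the number of preemptions and context switches.
QUANTUM = {"low": 2.0, "medium": 5.0, "high": 10.0}


def hybrid_round_robin(tasks):
    """Run all tasks to completion; return (completion order, context switches)."""
    queues = {level: deque() for level in ("low", "medium", "high")}
    for t in tasks:
        queues[t.complexity].append(t)

    order, switches = [], 0
    # Cycle low -> medium -> high, giving each queue's head one quantum per pass.
    while any(queues.values()):
        for level in ("low", "medium", "high"):
            if not queues[level]:
                continue
            task = queues[level].popleft()
            task.remaining -= QUANTUM[level]
            if task.remaining > 0:
                queues[level].append(task)  # preempted: back of its own queue
                switches += 1
            else:
                order.append(task.name)     # finished within this quantum
    return order, switches
```

For example, with tasks `a` (low, 3s), `b` (high, 12s), and `c` (medium, 5s), the sketch completes them in the order `["c", "a", "b"]` with only two context switches, since the medium and high tiers' larger quanta let `c` and `b` finish in few slices.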
This work provides a robust foundation for intelligent resource allocation in cloud rendering environments where task demands are variable and difficult to predict, establishing that effective scheduling requires complexity-aware algorithms rather than one-size-fits-all approaches.