Diffusion models have achieved remarkable success in generative tasks but remain computationally expensive due to their iterative sampling process. The Denoising Diffusion Implicit Model (DDIM) is a popular sampling method, yet it has a notable drawback: it employs a fixed-step schedule that allocates equal computational effort across all noise levels, overlooking the varying difficulty of the denoising process. In this work, we propose Adaptive Timestep Allocation for DDIM, a simple yet effective sampling scheme that dynamically adjusts step sizes based on both the noise variance and the gradient sensitivity of the denoising network. Our approach allocates larger steps during high-noise sampling stages, where coarse updates suffice, and smaller steps during low-noise stages, where fine details are critical. This dual adaptation is inspired by signal-to-noise ratio (SNR) analysis and adaptive ODE solvers, and it requires no retraining or architectural modifications. We evaluate our method on Stable Diffusion v1.5 and SDXL using MS-COCO captions and DrawBench prompts, observing improvements in Fréchet Inception Distance (FID) and CLIP score while reducing the number of sampling steps. Our results highlight that principled, adaptive step allocation offers a practical, plug-and-play solution for accelerating diffusion sampling without compromising image quality.
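To make the core idea concrete, the following is a minimal sketch (not the paper's implementation) of how a non-uniform DDIM timestep schedule could be chosen from a noise schedule: steps are packed densely where a combined difficulty score, mixing the local rate of change of the log-SNR with a gradient-sensitivity estimate, is high. The function name `adaptive_timesteps`, the `snr_weight` mixing parameter, and the caller-supplied `sensitivity` array are illustrative assumptions, not names from the paper.

```python
import numpy as np

def adaptive_timesteps(alphas_cumprod, num_steps, sensitivity=None, snr_weight=0.5):
    """Sketch of a non-uniform DDIM timestep schedule.

    Allocates more (smaller) steps where the combined difficulty score
    is high (typically the low-noise, high-SNR region) and fewer
    (larger) steps where it is low (the high-noise region).
    """
    T = len(alphas_cumprod)
    snr = alphas_cumprod / (1.0 - alphas_cumprod)  # per-timestep signal-to-noise ratio
    log_snr = np.log(snr)

    # Difficulty proxy 1: how fast the log-SNR changes between adjacent timesteps.
    d_log_snr = np.abs(np.gradient(log_snr))

    # Difficulty proxy 2: a caller-supplied, per-timestep gradient-sensitivity
    # estimate for the denoising network (hypothetical; e.g. the norm of the
    # denoiser's output change under a small input perturbation). Defaults to
    # uniform, which reduces the scheme to SNR-based spacing alone.
    if sensitivity is None:
        sensitivity = np.ones(T)
    sensitivity = sensitivity / sensitivity.max()

    # Combined step density, then its normalized cumulative distribution.
    density = snr_weight * d_log_snr / d_log_snr.max() + (1.0 - snr_weight) * sensitivity
    cdf = np.cumsum(density)
    cdf = cdf / cdf[-1]

    # Invert the CDF at uniform quantiles: high-density regions receive
    # more timesteps, i.e. smaller step sizes.
    quantiles = np.linspace(0.0, 1.0, num_steps)
    steps = np.searchsorted(cdf, quantiles)
    steps = np.unique(np.clip(steps, 0, T - 1))
    return steps[::-1]  # DDIM samples from high noise (t = T-1) down to t = 0

# Usage with a standard linear beta schedule (1000 training timesteps):
betas = np.linspace(1e-4, 0.02, 1000)
alphas_cumprod = np.cumprod(1.0 - betas)
schedule = adaptive_timesteps(alphas_cumprod, num_steps=25)
```

Under a linear beta schedule, the log-SNR changes fastest near the low-noise end, so this inversion naturally concentrates steps there, consistent with the allocation described above.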