Barren plateaus (BPs) remain a core challenge in training quantum neural networks (QNNs), where vanishing gradients hinder convergence. This paper proposes a layerwise quantum training (LQT) strategy that builds a parameterized quantum circuit (PQC) incrementally, optimizing one layer at a time. By growing the QNN gradually, our approach avoids initializing a deep circuit all at once. Experimental results demonstrate that LQT mitigates the onset of barren plateaus and improves convergence rates compared with conventional and residual-based QNNs, making it a scalable alternative for Noisy Intermediate-Scale Quantum (NISQ)-era devices.
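The layerwise idea can be illustrated with a toy sketch: grow the circuit one layer at a time, optimizing only the newly appended layer while earlier layers stay frozen. Everything below is an illustrative assumption, not the paper's exact setup: the ansatz (RY rotations plus a CZ chain), the cost (the Z-expectation on qubit 0), the near-identity initialization, and the plain gradient-descent loop with parameter-shift gradients.

```python
# Minimal sketch of layerwise quantum training (LQT) on a toy NumPy
# state-vector simulator. Circuit and cost choices are illustrative
# assumptions, not the paper's exact configuration.
import numpy as np

N_QUBITS = 3
DIM = 2 ** N_QUBITS
rng = np.random.default_rng(0)

def apply_single(state, gate, q):
    """Apply a 2x2 gate to qubit q of an n-qubit state vector."""
    state = state.reshape([2] * N_QUBITS)
    state = np.moveaxis(state, q, 0)
    state = np.tensordot(gate, state, axes=([1], [0]))
    state = np.moveaxis(state, 0, q)
    return state.reshape(DIM)

def apply_cz(state, q1, q2):
    """Apply a controlled-Z between qubits q1 and q2 (diagonal gate)."""
    state = state.reshape([2] * N_QUBITS).copy()
    idx = [slice(None)] * N_QUBITS
    idx[q1], idx[q2] = 1, 1
    state[tuple(idx)] *= -1
    return state.reshape(DIM)

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def run_circuit(params):
    """params: (n_layers, N_QUBITS); each row is one RY + CZ-chain layer."""
    state = np.zeros(DIM)
    state[0] = 1.0
    for layer in params:
        for q in range(N_QUBITS):
            state = apply_single(state, ry(layer[q]), q)
        for q in range(N_QUBITS - 1):
            state = apply_cz(state, q, q + 1)
    return state

def cost(params):
    """Expectation of Pauli-Z on qubit 0; the minimum is -1."""
    probs = np.abs(run_circuit(params).reshape(2, -1)) ** 2
    return probs[0].sum() - probs[1].sum()

def train_layerwise(n_layers, steps=200, lr=0.3):
    """Grow the circuit layer by layer, training only the newest layer."""
    frozen = np.empty((0, N_QUBITS))
    for _ in range(n_layers):
        new = rng.uniform(-0.1, 0.1, N_QUBITS)  # shallow, near-identity init
        for _ in range(steps):
            grad = np.zeros(N_QUBITS)
            for i in range(N_QUBITS):           # parameter-shift gradients
                shift = np.zeros(N_QUBITS)
                shift[i] = np.pi / 2
                grad[i] = 0.5 * (cost(np.vstack([frozen, new + shift]))
                                 - cost(np.vstack([frozen, new - shift])))
            new -= lr * grad
        frozen = np.vstack([frozen, new])       # freeze the trained layer
    return frozen, cost(frozen)

params, final_cost = train_layerwise(n_layers=3)
print(f"final cost: {final_cost:.3f}")
```

Because each new layer starts near the identity and is trained on its own, the optimized parameters are never drawn from a deep random circuit, which is the regime where barren plateaus arise; the final cost approaches the minimum of -1 in this toy problem.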
Copyright © 2025