Ensemble learning has long been a cornerstone of machine learning, improving predictive performance and robustness by combining multiple models. In the era of deep learning, however, the landscape of ensemble techniques has evolved rapidly, shaped by advances in neural architectures, training paradigms, and practical application requirements. This review provides a state-of-the-art survey of ensemble deep learning, focusing on recent developments in ensemble methods. We introduce a classification of ensemble strategies based on model diversity, fusion mechanisms, and task alignment, and highlight emerging techniques such as attention-based ensemble fusion, neural architecture search-based ensembles, and ensembles of large language and vision models. The review also examines theoretical foundations, practical trade-offs, and domain-specific adaptations across application areas. Drawing on state-of-the-art benchmarks, we evaluate ensemble performance in terms of accuracy, efficiency, robustness, and interpretability. We further identify key challenges, including scalability, overfitting, and deployment limitations, and outline open research directions such as ensemble learning for continual learning, federated learning, and learning from scratch. By connecting key insights with current trends, this review aims to guide researchers and practitioners in designing and implementing ensemble deep learning systems to address the next generation of AI challenges.