IAES International Journal of Artificial Intelligence (IJ-AI)
Vol 14, No 5: October 2025

Ensemble reverse knowledge distillation: training robust model using weak models

Reswara, Christopher Gavra
Cenggoro, Tjeng Wawan



Article Info

Publish Date
01 Oct 2025

Abstract

To ensure that artificial intelligence (AI) remains aligned with humans, AI models need to be developed and supervised by humans. Unfortunately, an AI model may come to exceed human capabilities; the problem of aligning such a model is commonly referred to as superalignment. This raises the question of whether humans can still supervise a superhuman model, a problem encapsulated in the concept of weak-to-strong generalization. To address this issue, we introduce ensemble reverse knowledge distillation (ERKD), which leverages two weaker models to supervise a more robust model. This technique is a potential solution for humans to manage superalignment. ERKD enables a more robust model to achieve optimal performance with the assistance of two weaker models. We trained a more robust EfficientNet model under the supervision of weaker convolutional neural network (CNN) models. With this method, the EfficientNet model performed better than a model trained with the standard transfer learning (STL) method. It also performed better than a model supervised by a single weaker model. Finally, ERKD-trained EfficientNet models can outperform EfficientNet variants that are one or even two levels stronger.
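The abstract does not spell out the ERKD objective. A minimal sketch of one plausible formulation, assuming the two weak teachers' temperature-softened outputs are averaged into a single soft target that is blended with the usual hard-label cross-entropy; the function names, temperature T, and mixing weight alpha are illustrative assumptions, not the paper's actual values:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T produces softer distributions.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def erkd_loss(student_logits, weak_logits_a, weak_logits_b, labels,
              T=2.0, alpha=0.5):
    """Sketch of an ensemble reverse-KD loss: the strong student matches the
    averaged soft targets of two weak teachers, blended with cross-entropy
    on the ground-truth labels (hypothetical hyperparameters)."""
    # Ensemble soft target: mean of the two weak teachers' distributions.
    ensemble = 0.5 * (softmax(weak_logits_a, T) + softmax(weak_logits_b, T))
    # Soft-target term: cross-entropy between ensemble target and student,
    # scaled by T^2 as is conventional in distillation.
    log_p_student = np.log(softmax(student_logits, T) + 1e-12)
    kd = -(ensemble * log_p_student).sum(axis=-1).mean() * T * T
    # Hard-label term: standard cross-entropy at temperature 1.
    p = softmax(student_logits)
    ce = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * kd + (1 - alpha) * ce
```

In this sketch the "reverse" direction is reflected only in which model plays the student: the stronger network receives gradients from the weaker ensemble's targets, the opposite of standard distillation.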

Copyrights © 2025






Journal Info

Abbrev

IJAI

Publisher

Subject

Computer Science & IT Engineering

Description

IAES International Journal of Artificial Intelligence (IJ-AI) publishes articles in the field of artificial intelligence (AI). The scope covers all artificial intelligence area and its application in the following topics: neural networks; fuzzy logic; simulated biological evolution algorithms (like ...