Categorizing lung diseases from chest X-ray (CXR) images is an essential medical imaging task for early diagnosis and treatment planning. This work constructs a robust ensemble learning framework over a variety of deep models to boost diagnostic performance in detecting and identifying lung disease. Three pre-trained CNN models, InceptionV3, ResNet50, and EfficientNetV2M, were trained on a CXR dataset, motivated by their complementary architectural features and their demonstrated success in medical imaging problems such as chest X-ray analysis. These three networks belong to different CNN families and therefore contribute diversity and stability to the ensemble. The models were combined using two methods: averaging of predicted class probabilities (soft voting) and bootstrap aggregation (bagging) with hard majority voting. Various combinations of the pre-trained models were evaluated for the averaged ensemble. According to the experimental results, the soft-voting (averaged) ensemble of EfficientNetV2M and InceptionV3 outperformed the other model combinations, achieving the highest classification accuracy of 93.00%. It was followed by the combination of EfficientNetV2M and ResNet50 with an accuracy of 92.09%, the complete ensemble of all three models with 92.14%, and InceptionV3 with ResNet50 at 91.75%. The bagging hard-voting strategy yielded somewhat lower accuracy: the InceptionV3-based bagging ensemble attained 90.56%, EfficientNetV2M 91.00%, and ResNet50 88.00%. The results show that, under the soft-voting strategy, the InceptionV3 and EfficientNetV2M ensemble provides the best and most stable classification performance among all configurations attempted.
The study demonstrates that ensemble learning improves the accuracy of lung disease classification models and that choosing the right architectures is essential, with EfficientNetV2M and InceptionV3 showing the strongest performance, supporting earlier diagnosis and improved patient outcomes.
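The two combination rules described above can be sketched concisely. The following is a minimal NumPy illustration, not the authors' implementation: soft voting averages the per-model class-probability matrices before taking the argmax, while hard voting takes a majority over the per-model class predictions. The function names and the toy probability values are illustrative assumptions.

```python
import numpy as np

def soft_vote(prob_list):
    # Average the (n_samples, n_classes) probability matrices from
    # each model, then pick the class with the highest mean probability.
    return np.mean(prob_list, axis=0).argmax(axis=1)

def hard_vote(pred_list):
    # Stack per-model class predictions into (n_models, n_samples),
    # count votes per class in each column, and return the majority
    # class per sample (ties resolve to the lowest class index).
    preds = np.stack(pred_list)
    n_classes = preds.max() + 1
    votes = np.apply_along_axis(np.bincount, 0, preds, minlength=n_classes)
    return votes.argmax(axis=0)

# Toy softmax outputs for two hypothetical models, 3 samples, 3 classes.
probs_a = np.array([[0.7, 0.2, 0.1], [0.1, 0.6, 0.3], [0.3, 0.3, 0.4]])
probs_b = np.array([[0.6, 0.3, 0.1], [0.4, 0.5, 0.1], [0.2, 0.2, 0.6]])

print(soft_vote([probs_a, probs_b]))                    # soft-voted labels
print(hard_vote([[0, 1, 2], [0, 1, 2], [0, 2, 2]]))     # majority labels
```

In practice `prob_list` would hold the softmax outputs of the fine-tuned InceptionV3, ResNet50, and EfficientNetV2M models on the test set; pairwise combinations correspond to passing two of the three matrices.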
Copyright © 2025