Indonesian Journal of Electrical Engineering and Computer Science
Vol 40, No 2: November 2025

A systematic evaluation of pre-trained encoder architectures for multimodal brain tumor segmentation using U-Net-based architectures

Abbas, Marwa
Khalaf, Ashraf A. M.
Mogahed, Hussein
Hussein, Aziza I.
Gaber, Lamya
Mabrook, M. Mourad



Article Info

Publish Date
01 Nov 2025

Abstract

Accurate brain tumor segmentation from medical imaging is critical for early diagnosis and effective treatment planning. Deep learning methods, particularly U-Net-based architectures, have demonstrated strong performance in this domain. However, prior studies have primarily focused on a limited set of encoder backbones, overlooking the potential advantages of alternative pretrained models. This study presents a systematic evaluation of twelve pretrained convolutional neural networks (ResNet34, ResNet50, ResNet101, VGG16, VGG19, DenseNet121, InceptionResNetV2, InceptionV3, MobileNetV2, EfficientNetB1, SE-ResNet34, and SE-ResNet18) used as encoder backbones within the U-Net framework to identify and segment tumor-affected brain regions in the BraTS 2019 multimodal MRI dataset. Model performance was assessed through cross-validation, incorporating fault detection to enhance reliability. The MobileNetV2-based U-Net outperformed all other configurations, achieving 99% cross-validation accuracy and 99.3% test accuracy. It also achieved a Jaccard coefficient of 83.45% and Dice coefficients of 90.3% (Whole Tumor), 86.07% (Tumor Core), and 81.93% (Enhancing Tumor), with a low test loss of 0.0282. These results demonstrate that MobileNetV2 is a highly effective encoder backbone for U-Net in segmenting tumor-affected brain regions from multimodal medical imaging data.
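The paper does not state its implementation details. As a rough illustration of the encoder-swapping idea described above, the sketch below (added here for clarity, not taken from the study) shows how a U-Net with a pretrained MobileNetV2 encoder could be assembled with the open-source segmentation_models package; the input shape, class count, and loss function are illustrative assumptions rather than the authors' settings.

    # Minimal sketch: U-Net with a MobileNetV2 encoder via segmentation_models.
    # Assumes 2D slices with 3 channels (e.g., three of the four BraTS MRI
    # modalities stacked) so that ImageNet encoder weights can be reused.
    import segmentation_models as sm

    sm.set_framework('tf.keras')

    model = sm.Unet(
        backbone_name='mobilenetv2',   # pretrained encoder backbone
        encoder_weights='imagenet',
        input_shape=(128, 128, 3),     # assumed slice size, not from the paper
        classes=4,                     # background + 3 tumor sub-regions (assumed)
        activation='softmax',
    )

    model.compile(
        optimizer='adam',
        loss=sm.losses.DiceLoss() + sm.losses.CategoricalFocalLoss(),
        metrics=[sm.metrics.IOUScore(), sm.metrics.FScore()],  # Jaccard and Dice
    )

Swapping the other evaluated backbones (e.g., 'resnet34', 'vgg16', 'densenet121') only requires changing the backbone_name argument, which is what makes this kind of systematic encoder comparison straightforward to set up.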

Copyrights © 2025