Mogahed, Hussein
Unknown Affiliation

Published: 1 Document

A systematic evaluation of pre-trained encoder architectures for multimodal brain tumor segmentation using U-Net-based architectures
Abbas, Marwa; Khalaf, Ashraf A. M.; Mogahed, Hussein; Hussein, Aziza I.; Gaber, Lamya; Mabrook, M. Mourad
Indonesian Journal of Electrical Engineering and Computer Science Vol 40, No 2: November 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v40.i2.pp850-859

Abstract

Accurate brain tumor segmentation from medical imaging is critical for early diagnosis and effective treatment planning. Deep learning methods, particularly U-Net-based architectures, have demonstrated strong performance in this domain. However, prior studies have primarily focused on a limited set of encoder backbones, overlooking the potential advantages of alternative pre-trained models. This study presents a systematic evaluation of twelve pre-trained convolutional neural networks—ResNet34, ResNet50, ResNet101, VGG16, VGG19, DenseNet121, InceptionResNetV2, InceptionV3, MobileNetV2, EfficientNetB1, SE-ResNet34, and SE-ResNet18—used as encoder backbones in the U-Net framework to identify and extract tumor-affected brain regions from the BraTS 2019 multimodal MRI dataset. Model performance was assessed through cross-validation, incorporating fault detection to enhance reliability. The MobileNetV2-based U-Net configuration outperformed all other architectures, achieving 99% cross-validation accuracy and 99.3% test accuracy. It also achieved a Jaccard coefficient of 83.45% and Dice coefficients of 90.3% (Whole Tumor), 86.07% (Tumor Core), and 81.93% (Enhancing Tumor), with a low test loss of 0.0282. These results demonstrate that MobileNetV2 is a highly effective encoder backbone for U-Net in segmenting tumor-affected brain regions from multimodal medical imaging data.
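For readers unfamiliar with the overlap metrics reported above, the Dice coefficient (2·|A ∩ B| / (|A| + |B|)) and the Jaccard coefficient (|A ∩ B| / |A ∪ B|) compare a predicted segmentation mask against the ground-truth mask. The sketch below is an illustrative NumPy implementation on binary masks, not code from the paper; the toy 4×4 masks are invented for demonstration.

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice = 2*|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom else 1.0

def jaccard_coefficient(pred, target):
    """Jaccard (IoU) = |A ∩ B| / |A ∪ B| for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return intersection / union if union else 1.0

# Toy example (hypothetical masks): predicted vs. ground-truth tumor region.
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
gt   = np.array([[0, 1, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
print(dice_coefficient(pred, gt))     # 2*3 / (4+3) ≈ 0.857
print(jaccard_coefficient(pred, gt))  # 3 / 4 = 0.75
```

In multi-region evaluations such as BraTS, these metrics are computed separately per region (Whole Tumor, Tumor Core, Enhancing Tumor) by binarizing the label map for each region before scoring, which is how the three Dice values above can differ.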