Gliomas are among the most prevalent malignant brain tumors, posing considerable challenges for patient prognosis and treatment planning. Precise segmentation of these tumors is essential for diagnosis, surgical planning, and intraoperative radiologic monitoring. Delineating the glioma subregions, namely the enhancing tumor (ET), tumor core (TC), and whole tumor (WT), enables targeted therapy and supports monitoring of tumor growth over time. This article presents the 3D U-Net Transformer, a deep learning architecture that integrates convolutional layers with transformer-based self-attention mechanisms. The model efficiently processes multimodal MRI scans, using skip connections and attention modules to fuse local spatial detail with global context and thereby improve segmentation performance. Validated on the BraTS 2020 dataset, a benchmark for brain tumor segmentation, the 3D U-Net Transformer outperformed established architectures such as U-Net and UNet++. It achieved high Dice coefficients for the ET, TC, and WT regions, together with strong sensitivity and specificity, supporting its clinical reliability. By providing accurate tumor delineation, this approach can improve surgical procedures and clinical decision-making. The integration of transformer modules within U-Net architectures points to substantial potential for advances in 3D medical imaging and real-time applications. Experiments were conducted on a workstation with an Intel i7 CPU, 12 GB RAM, x64 architecture, and Intel HD Graphics 3000, with training performed on Kaggle using a P100 GPU, Python 3.8, and TensorFlow 2.4 on a 64-bit operating system.
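The evaluation metrics named above (Dice coefficient, sensitivity, specificity) can be illustrated with a minimal sketch. This is not the paper's evaluation code; it is a generic NumPy implementation of the standard definitions, applied to a toy 3D binary mask standing in for one BraTS subregion (the array shapes and voxel counts are illustrative assumptions):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def sensitivity(pred, target, eps=1e-7):
    """True-positive rate: fraction of tumor voxels correctly detected."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    return (tp + eps) / (target.sum() + eps)

def specificity(pred, target, eps=1e-7):
    """True-negative rate: fraction of background voxels correctly rejected."""
    pred, target = pred.astype(bool), target.astype(bool)
    tn = np.logical_and(~pred, ~target).sum()
    return (tn + eps) / ((~target).sum() + eps)

# Toy 4x4x4 volume standing in for one subregion mask (e.g. WT).
target = np.zeros((4, 4, 4), dtype=np.uint8)
target[1:3, 1:3, 1:3] = 1        # 8 tumor voxels
pred = target.copy()
pred[1, 1, 1] = 0                # one missed voxel -> 7 true positives
print(round(dice_coefficient(pred, target), 3))  # 2*7/(7+8) -> 0.933
```

In BraTS-style evaluation these metrics are computed per subregion (ET, TC, WT) on the binarized prediction for each case and then averaged across cases.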
Copyright © 2026