Found 2 Documents
Multi-Modal Graph-Aware Transformer with Contrastive Fusion for Brain Tumor Segmentation
Chowdhury, Rini; Kumar, Prashant; Suganthi, R.; Ammu, V.; Evance Leethial, R.; Roopa, C.
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 4 (2025): October
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i4.993

Abstract

Accurate segmentation of brain tumors in MRI is critical for early diagnosis, surgical planning, and effective treatment strategies. Traditional deep learning models such as U-Net, Attention U-Net, and Swin-UNet have demonstrated commendable success in tumor segmentation by leveraging Convolutional Neural Networks (CNNs) and transformer-based encoders. However, these models often fall short in capturing complex inter-modality interactions and long-range spatial dependencies, particularly in tumor regions with diffuse or poorly defined boundaries. They also generalize poorly across datasets and demand substantial computational resources. To overcome these limitations, a novel approach named Graph-Aware Transformer with Contrastive Fusion (GAT-CF) is introduced. This model enhances segmentation performance by integrating the spatial attention mechanisms of transformers with graph-based relational reasoning across multiple MRI modalities, namely T1, T2, FLAIR, and T1CE. The graph-aware structure models inter-slice and intra-slice relationships more effectively, promoting a better structural understanding of tumor regions. Furthermore, a multi-modal contrastive learning strategy is employed to align semantic features and distinguish complementary modality-specific information, thereby improving the model's discriminative power. The fusion of these techniques facilitates improved contextual understanding and more accurate boundary delineation in complex tumor regions. Evaluated on the BraTS2021 dataset, the proposed GAT-CF model achieved a Dice score of 99.1% and an IoU of 98.4%, surpassing state-of-the-art architectures such as Swin-UNet and SegResNet. It also demonstrated superior accuracy in detecting enhancing-tumor voxels and tumor-core regions, highlighting its robustness, precision, and potential for clinical adoption in neuroimaging applications.
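The abstract's multi-modal contrastive learning strategy is not specified in detail; a common realization is an InfoNCE-style loss that pulls together embeddings of the same voxel (or patch) from two modalities while pushing apart mismatched pairs. The following NumPy sketch illustrates that general idea under stated assumptions; the function name, shapes, and temperature value are illustrative, not the authors' implementation.

```python
import numpy as np

def modality_contrastive_loss(feat_a, feat_b, temperature=0.1):
    """InfoNCE-style alignment loss between embeddings of the same samples
    from two MRI modalities (e.g. FLAIR vs. T1CE).

    feat_a, feat_b: (N, D) arrays; row i of each is the embedding of the
    same voxel/patch. Matching rows are positives, all others negatives.
    """
    # L2-normalize so the dot product is a cosine similarity
    a = feat_a / np.linalg.norm(feat_a, axis=1, keepdims=True)
    b = feat_b / np.linalg.norm(feat_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature                # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # positives on the diagonal

# Toy check: perfectly aligned modality features give a lower loss
# than unrelated ones.
rng = np.random.default_rng(0)
aligned = rng.normal(size=(8, 16))
loss_aligned = modality_contrastive_loss(aligned, aligned)
loss_random = modality_contrastive_loss(aligned, rng.normal(size=(8, 16)))
```

In a real pipeline the embeddings would come from the per-modality encoder branches, and this loss would be added to the Dice/cross-entropy segmentation objective with a weighting coefficient.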
Adaptive Threshold-Enhanced Deep Segmentation of Acute Intracranial Hemorrhage and its Subtypes in Brain CT Images
Suganthi, R.; Yalagi, Pratibha C. Kaladeep; Chowdhury, Rini; Kumar, Prashant; Sharmila, D.; Krishna, Kunchanapalli Rama
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 4 (2025): October
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i4.1048

Abstract

Accurate segmentation of acute intracranial hemorrhage (ICH) in brain computed tomography (CT) scans is crucial for timely diagnosis and effective treatment planning. While the RSNA Intracranial Hemorrhage Detection dataset provides a substantial amount of labeled CT data, most prior research has focused on slice-level classification rather than precise pixel-level segmentation. To address this limitation, a novel segmentation pipeline is proposed that combines a 2.5D U-Net architecture with a dynamic adaptive thresholding technique for enhanced delineation of hemorrhagic lesions and their subtypes. The 2.5D U-Net model leverages spatial continuity across adjacent slices to generate initial lesion probability maps, which are subsequently refined using an adaptive thresholding method that adjusts based on local pixel intensity histograms and edge gradients. Unlike fixed global thresholding approaches such as Otsu's method, the proposed technique varies thresholds dynamically, enabling more accurate differentiation between hemorrhagic tissue and surrounding brain structures, especially in challenging cases with diffuse or overlapping boundaries. The model was evaluated on carefully selected subsets of the RSNA dataset, achieving a mean Dice similarity coefficient of 0.82 across all ICH subtypes. Compared to standard U-Net and DeepLabV3+ architectures, the hybrid approach demonstrated superior accuracy, better boundary precision, and fewer false positives. Visual analysis confirmed more precise lesion delineation and better correspondence with manual annotations, particularly in low-contrast or complex anatomical regions. This integrated approach proves effective for robust segmentation in clinical environments. It holds promise for deployment in computer-aided diagnosis systems, providing radiologists and neurosurgeons with a reliable tool for comprehensive ICH assessment and enhanced decision-making during emergency care.
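The abstract contrasts the adaptive rule with global Otsu thresholding but does not give its exact formula. A minimal sketch of the general idea, using a Niblack-style local mean-plus-deviation rule as a stand-in for the paper's histogram/edge-gradient rule (the rule, window size, and clamp bounds here are assumptions), could look like this:

```python
import numpy as np

def adaptive_threshold(prob_map, window=15, k=0.2, lo=0.2, hi=0.8):
    """Locally adaptive binarization of a lesion probability map.

    Each pixel is compared against a threshold computed from its local
    window: mean + k * std (a Niblack-style rule standing in for the
    paper's unspecified histogram/edge-gradient rule), clamped to
    [lo, hi] so uniform regions still binarize sensibly.
    """
    pad = window // 2
    padded = np.pad(prob_map, pad, mode="reflect")
    h, w = prob_map.shape
    out = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + window, j:j + window]
            thresh = np.clip(patch.mean() + k * patch.std(), lo, hi)
            out[i, j] = prob_map[i, j] > thresh
    return out

# Toy probability map: a bright lesion blob on a dark background.
prob_map = np.zeros((40, 40))
prob_map[15:25, 15:25] = 0.9
seg = adaptive_threshold(prob_map)
```

In practice the input would be the 2.5D U-Net's probability map for one slice, and a vectorized or integral-image implementation would replace the double loop for speed.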