Found 3 Documents

Multi-Modal Graph-Aware Transformer with Contrastive Fusion for Brain Tumor Segmentation Chowdhury, Rini; Kumar, Prashant; Suganthi, R.; Ammu, V.; Evance Leethial, R.; Roopa, C.
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 4 (2025): October
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i4.993

Abstract

Accurate segmentation of brain tumors in MRI images is critical for early diagnosis, surgical planning, and effective treatment strategies. Traditional deep learning models such as U-Net, Attention U-Net, and Swin-U-Net have demonstrated commendable success in tumor segmentation by leveraging Convolutional Neural Networks (CNNs) and transformer-based encoders. However, these models often fall short in effectively capturing complex inter-modality interactions and long-range spatial dependencies, particularly in tumor regions with diffuse or poorly defined boundaries. Additionally, they suffer from limited generalization capabilities and demand substantial computational resources. To overcome these limitations, a novel approach named Graph-Aware Transformer with Contrastive Fusion (GAT-CF) is introduced. This model enhances segmentation performance by integrating the spatial attention mechanisms of transformers with graph-based relational reasoning across multiple MRI modalities, namely T1, T2, FLAIR, and T1CE. The graph-aware structure models inter-slice and intra-slice relationships more effectively, promoting better structural understanding of tumor regions. Furthermore, a multi-modal contrastive learning strategy is employed to align semantic features and distinguish complementary modality-specific information, thereby improving the model's discriminative power. The fusion of these techniques facilitates improved contextual understanding and more accurate boundary delineation in complex tumor regions. When evaluated on the BraTS2021 dataset, the proposed GAT-CF model achieved a Dice score of 99.1% and an IoU of 98.4%, surpassing state-of-the-art architectures such as Swin-UNet and SegResNet. It also demonstrated superior accuracy in detecting enhancing-tumor voxels and tumor-core regions, highlighting its robustness, precision, and potential for clinical adoption in neuroimaging applications.
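The multi-modal contrastive alignment described in this abstract can be sketched with an InfoNCE-style objective between embeddings of two MRI modalities. This is an illustrative assumption: the abstract does not specify the exact loss, and `info_nce_loss` is a hypothetical helper, not the authors' code.

```python
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.1):
    """Symmetric InfoNCE-style contrastive loss between two sets of
    modality embeddings; row i of z_a and z_b are a positive pair
    (e.g. T1 and FLAIR features of the same slice)."""
    # L2-normalise rows so the dot product is cosine similarity
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature            # (N, N) similarity matrix
    # Diagonal entries are positives; all other rows act as negatives
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    loss_ab = -np.mean(np.diag(log_probs))
    # Symmetrise over the reverse direction (z_b against z_a)
    log_probs_t = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    loss_ba = -np.mean(np.diag(log_probs_t))
    return 0.5 * (loss_ab + loss_ba)

rng = np.random.default_rng(0)
aligned = rng.normal(size=(8, 32))
# Well-aligned modality pairs (small perturbation) vs. unrelated embeddings
loss_aligned = info_nce_loss(aligned, aligned + 0.01 * rng.normal(size=(8, 32)))
loss_random = info_nce_loss(aligned, rng.normal(size=(8, 32)))
```

Minimizing such a loss pulls matching modality features together while pushing non-matching ones apart, which is the "align semantic features" behavior the abstract attributes to its contrastive fusion stage.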
Adaptive Threshold-Enhanced Deep Segmentation of Acute Intracranial Hemorrhage and its Subtypes in Brain CT Images Suganthi, R.; Yalagi, Pratibha C. Kaladeep; Chowdhury, Rini; Kumar, Prashant; Sharmila, D.; Krishna, Kunchanapalli Rama
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 4 (2025): October
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i4.1048

Abstract

Accurate segmentation of acute intracranial hemorrhage (ICH) in brain computed tomography (CT) scans is crucial for timely diagnosis and effective treatment planning. While the RSNA Intracranial Hemorrhage Detection dataset provides a substantial amount of labeled CT data, most prior research has focused on slice-level classification rather than precise pixel-level segmentation. To address this limitation, a novel segmentation pipeline is proposed that combines a 2.5D U-Net architecture with a dynamic adaptive thresholding technique for enhanced delineation of hemorrhagic lesions and their subtypes. The 2.5D U-Net model leverages spatial continuity across adjacent slices to generate initial lesion probability maps, which are subsequently refined using an adaptive thresholding method that adjusts based on local pixel intensity histograms and edge gradients. Unlike fixed global thresholding approaches such as Otsu's method, the proposed technique varies thresholds dynamically, enabling more accurate differentiation between hemorrhagic tissue and surrounding brain structures, especially in challenging cases with diffuse or overlapping boundaries. The model was evaluated on carefully selected subsets of the RSNA dataset, achieving a mean Dice similarity coefficient of 0.82 across all ICH subtypes. Compared to standard U-Net and DeepLabV3+ architectures, the hybrid approach demonstrated superior accuracy, boundary precision, and fewer false positives. Visual analysis confirmed more precise lesion delineation and better correspondence with manual annotations, particularly in low-contrast or complex anatomical regions. This integrated approach proves effective for robust segmentation in clinical environments and holds promise for deployment in computer-aided diagnosis systems, providing radiologists and neurosurgeons with a reliable tool for comprehensive ICH assessment and enhanced decision-making during emergency care.
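The contrast with fixed global thresholding can be illustrated with a simplified local-statistics sketch: each block of the probability map gets its own cut-off (local mean plus a multiple of the local standard deviation, in the style of Niblack's method). This is only a stand-in; the paper's technique additionally uses local intensity histograms and edge gradients, which are omitted here.

```python
import numpy as np

def local_adaptive_threshold(prob_map, block=16, k=0.2):
    """Threshold each block of a lesion probability map at its own
    local mean + k * local std, instead of one global cut-off.
    (Simplified Niblack-style sketch, not the paper's exact method.)"""
    h, w = prob_map.shape
    mask = np.zeros_like(prob_map, dtype=bool)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = prob_map[y:y + block, x:x + block]
            thresh = tile.mean() + k * tile.std()
            mask[y:y + block, x:x + block] = tile > thresh
    return mask

# Toy probability map: a 10x10 "lesion" blob on a darker background
pm = np.full((64, 64), 0.1)
pm[20:30, 20:30] = 0.9
seg = local_adaptive_threshold(pm)
```

Because the threshold adapts per block, the uniform background blocks produce no false positives while the mixed block containing the blob still separates lesion from background, which is the behavior that fixed global cut-offs struggle with in low-contrast regions.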
Hybrid Swarm-Driven Vision Transformer (HSViT) for Lung Cancer Segmentation and Classification from CT Scans V, Kavithamani; Kavya, V.; Suganthi, R.; S., Yuvaraj; Monisha, P.; Arun Patrick
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 8 No 1 (2026): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v8i1.1384

Abstract

Lung cancer segmentation and classification from computed tomography (CT) images play a vital role in early diagnosis, prognosis assessment, and effective treatment planning. Despite significant progress in medical image analysis, accurate lung lesion analysis remains highly challenging due to overlapping anatomical structures, heterogeneous tissue intensity distributions, irregular and complex tumor shapes, and poorly defined lesion boundaries. These factors often limit the reliability and generalization capability of conventional deep learning models when applied to real-world clinical data. To address these challenges, this paper proposes a Hybrid Swarm-Driven Vision Transformer (HSViT) framework that synergistically combines swarm intelligence with transformer-based deep learning. The processing pipeline begins with Contrast Limited Adaptive Histogram Equalization (CLAHE), which enhances local contrast while suppressing noise amplification, thereby improving the visibility of subtle pulmonary nodules and lesion regions. Subsequently, a U-Net segmentation model optimized using the Coyote Optimization Algorithm (COA) is employed to accurately delineate lung lesions. COA, a swarm-based metaheuristic, adaptively fine-tunes U-Net parameters, enabling improved convergence and more precise boundary detection compared to gradient-based optimization alone. Following segmentation, discriminative lesion features are extracted and passed to the HSViT classifier. The proposed classifier integrates a Dual-Stage Attention Fusion (DSAF) mechanism, which effectively captures both fine-grained local spatial features and long-range global contextual dependencies. The framework achieves a Dice Coefficient of 0.95, an overall classification accuracy of 98.7%, and a minimized training loss of 0.04. These results highlight the strong potential of HSViT for reliable automated lung cancer diagnosis and for supporting clinical decision-making systems in real-world healthcare environments.
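The swarm-based tuning step can be illustrated with a generic population-based search on a toy two-parameter objective. This is a deliberately simplified sketch of the pattern COA and related swarm metaheuristics share (agents drift toward the current best with random perturbation), not the full Coyote Optimization Algorithm, and the quadratic `loss` standing in for "segmentation loss as a function of two hyperparameters" is hypothetical.

```python
import numpy as np

def swarm_search(objective, bounds, n_agents=20, n_iter=50, seed=0):
    """Minimal population-based metaheuristic: each agent moves toward
    the best agent found so far, plus random exploration noise, and a
    move is kept only if it improves that agent's score."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(n_agents, len(lo)))
    scores = np.apply_along_axis(objective, 1, pop)
    for _ in range(n_iter):
        best = pop[scores.argmin()]
        step = rng.uniform(0, 1, size=pop.shape) * (best - pop)  # attraction
        noise = 0.1 * rng.normal(size=pop.shape) * (hi - lo)     # exploration
        cand = np.clip(pop + step + noise, lo, hi)
        cand_scores = np.apply_along_axis(objective, 1, cand)
        improved = cand_scores < scores                          # greedy accept
        pop[improved], scores[improved] = cand[improved], cand_scores[improved]
    return pop[scores.argmin()], scores.min()

# Toy stand-in objective with optimum at (0.3, 0.7)
loss = lambda v: (v[0] - 0.3) ** 2 + (v[1] - 0.7) ** 2
best, best_loss = swarm_search(loss, [(0, 1), (0, 1)])
```

Because the search only needs objective evaluations, not gradients, it can tune quantities that are awkward for backpropagation (learning rates, architectural choices, threshold parameters), which is the role the abstract assigns to COA alongside gradient-based U-Net training.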