Found 4 Documents

Ensemble learning based Convolutional Neural Network – Depth Fire for detecting COVID-19 in Chest X-Ray images Chandrika, G Naga; Chowdhury, Rini; Prashant Kumar; K, Sangamithrai; E, Glory; M D, Saranya
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 1 (2025): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i1.525

Abstract

COVID-19, the deadly disease caused by the novel coronavirus, has posed a significant challenge to healthcare systems around the world. To curb the virus's transmission and lessen its impact on public health, it is crucial to identify COVID-19 patients accurately and rapidly. Artificial intelligence (AI) has the capacity to increase the precision and efficiency of COVID-19 diagnosis. The purpose of this study is to build a reliable AI-based model capable of correctly detecting COVID-19 cases from chest X-ray images. A dataset of 16,000 chest X-ray images, including COVID-19-positive and COVID-19-negative cases, is used in the investigation. Four pre-trained convolutional neural network (CNN) models are employed in the proposed approach, and their outputs are combined using an ensembling technique. The major objective of this work is to develop an accurate and reliable AI-based model for classifying COVID-19 cases from chest X-ray images. The novelty of this method lies in its use of data augmentation strategies to improve model generalisation and prevent overfitting; the use of multiple pre-trained CNN models and ensembling further improves accuracy and dependability. The proposed AI-based model's classification accuracy for the five classes (bacterial, COVID-19 positive, negative, opacity, and viral), the three classes (COVID-19 positive, negative, and healthy), and the two classes (COVID-19 positive and negative) was 97.3%, 98.2%, and 97.6%, respectively. The proposed model performs better in terms of sensitivity, accuracy, and specificity than conventional techniques currently in use. Its ability to recognise COVID-19 cases quickly and effectively from chest X-rays could be of significant value: it can help radiologists assess patients quickly and accurately, improving patient outcomes and lessening the strain on healthcare systems. To ensure the precision of the diagnosis, the model's decisions should be interpreted in consultation with a licensed medical expert.
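As a rough illustration of the ensembling step described in this abstract, the sketch below averages softmax outputs from several ImageNet-pre-trained CNN backbones in PyTorch. The specific backbones (ResNet-50, DenseNet-121), the class count, and the equal weighting are assumptions for illustration only; the paper does not name the four CNNs or the exact ensembling rule.

# Minimal sketch of a softmax-averaging ensemble over pre-trained CNN backbones.
# The backbones, class count, and equal weighting are assumptions; the abstract
# only states that four pre-trained CNNs are combined by an ensembling technique.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # e.g. the two-class setting: COVID-19 positive vs. negative

def build_backbone(name: str) -> nn.Module:
    """Load an ImageNet-pre-trained backbone and replace its classifier head."""
    if name == "resnet50":
        m = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        m.fc = nn.Linear(m.fc.in_features, NUM_CLASSES)
    elif name == "densenet121":
        m = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
        m.classifier = nn.Linear(m.classifier.in_features, NUM_CLASSES)
    else:
        raise ValueError(f"unknown backbone: {name}")
    return m

class SoftmaxAveragingEnsemble(nn.Module):
    """Average the class probabilities predicted by several fine-tuned CNNs."""
    def __init__(self, backbones):
        super().__init__()
        self.backbones = nn.ModuleList(backbones)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        probs = [torch.softmax(m(x), dim=1) for m in self.backbones]
        return torch.stack(probs, dim=0).mean(dim=0)  # (batch, NUM_CLASSES)

ensemble = SoftmaxAveragingEnsemble(
    [build_backbone("resnet50"), build_backbone("densenet121")]
)
preds = ensemble(torch.randn(4, 3, 224, 224)).argmax(dim=1)  # hypothetical batch

Averaging probabilities is only one common ensembling choice; weighted averaging or majority voting over per-model predictions would be drop-in alternatives.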
A Novel Encoder Decoder Architecture with Vision Transformer for Medical Image Segmentation Saroj Bala; Arora, Kumud; R, Jeevitha; Chowdhury, Rini; Kumar, Prashant; Nageswari, C.Shobana
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 1 (2025): January
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i1.571

Abstract

Brain tumor image segmentation is one of the most critical tasks in medical imaging for diagnosis, treatment planning, and prognosis. Traditional methods for brain tumor image segmentation are mostly based on Convolutional Neural Networks (CNNs), which have proved very powerful but still struggle to effectively capture long-range dependencies and complex spatial hierarchies in MRI images. Variability in the shape, size, and location of tumors can degrade performance and lead to suboptimal outcomes. In this regard, a new encoder-decoder architecture, the VisionTranscoder, built on the Vision Transformer (ViT), is proposed to enhance brain tumor detection and classification. The VisionTranscoder exploits the transformer's ability to model global context through self-attention mechanisms, providing a more comprehensive interpretation of the intricate patterns in medical images and improving classification by capturing both local and global features. Its encoder uses a Vision Transformer that processes images as sequences of patches, capturing global dependencies that often lie outside the reach of traditional CNNs. The decoder then rebuilds the segmentation map at high fidelity through upsampling and skip connections that preserve detailed spatial information. The risk of overfitting is greatly reduced through architectural design, advanced regularization techniques, and extensive data augmentation. The dataset contains 7,023 human brain MRI images spanning four classes: glioma, meningioma, no tumor, and pituitary. Images in the 'no tumor' class, indicating an MRI scan without any detectable tumor, were taken from the Br35H dataset. The results show the effectiveness of the VisionTranscoder across a wide set of brain MRI scans, producing an accuracy of 98.5% with a loss of 0.05. This performance underlines its ability to accurately segment and classify brain tumors without overfitting.
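A minimal sketch of the encoder-decoder idea described above is given below, assuming 224x224 inputs, 16x16 patches, and a small transformer: ViT patch tokens are reshaped back into a 2D feature map and upsampled into a dense prediction map. The actual VisionTranscoder's hyper-parameters and skip-connection layout are not specified in the abstract, and skip connections are omitted here for brevity.

# Hedged sketch of a ViT-style encoder with an upsampling decoder.
# Patch size, embedding dimension, depth, and class count are assumptions.
import torch
import torch.nn as nn

class ViTEncoder(nn.Module):
    def __init__(self, img_size=224, patch=16, dim=256, depth=4, heads=8):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.grid = img_size // patch
        self.pos = nn.Parameter(torch.zeros(1, self.grid * self.grid, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)   # (B, N, dim)
        tokens = self.transformer(tokens + self.pos)
        # Reshape the token sequence back into a 2D feature map for the decoder.
        return tokens.transpose(1, 2).reshape(x.size(0), -1, self.grid, self.grid)

class UpDecoder(nn.Module):
    """Upsampling decoder that restores a full-resolution prediction map."""
    def __init__(self, dim=256, num_classes=4):
        super().__init__()
        self.up = nn.Sequential(
            nn.ConvTranspose2d(dim, 128, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, num_classes, 2, stride=2),
        )

    def forward(self, feat):
        return self.up(feat)  # (B, num_classes, H, W)

model = nn.Sequential(ViTEncoder(), UpDecoder())
logits = model(torch.randn(1, 3, 224, 224))  # -> (1, 4, 224, 224)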
Multi-Modal Graph-Aware Transformer with Contrastive Fusion for Brain Tumor Segmentation Chowdhury, Rini; Kumar, Prashant; Suganthi, R.; Ammu, V.; Evance Leethial, R.; Roopa, C.
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 4 (2025): October
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i4.993

Abstract

Accurate segmentation of brain tumors in MRI images is critical for early diagnosis, surgical planning, and effective treatment strategies. Traditional deep learning models such as U-Net, Attention U-Net, and Swin-U-Net have demonstrated commendable success in tumor segmentation by leveraging Convolutional Neural Networks (CNNs) and transformer-based encoders. However, these models often fall short in effectively capturing complex inter-modality interactions and long-range spatial dependencies, particularly in tumor regions with diffuse or poorly defined boundaries. Additionally, they suffer from limited generalization capabilities and demand substantial computational resources. To overcome these limitations, a novel approach named Graph-Aware Transformer with Contrastive Fusion (GAT-CF) is introduced. This model enhances segmentation performance by integrating the spatial attention mechanisms of transformers with graph-based relational reasoning across multiple MRI modalities, namely T1, T2, FLAIR, and T1CE. The graph-aware structure models inter-slice and intra-slice relationships more effectively, promoting a better structural understanding of tumor regions. Furthermore, a multi-modal contrastive learning strategy is employed to align semantic features and distinguish complementary modality-specific information, thereby improving the model's discriminative power. The fusion of these techniques facilitates improved contextual understanding and more accurate boundary delineation in complex tumor regions. When evaluated on the BraTS2021 dataset, the proposed GAT-CF model achieved a Dice score of 99.1% and an IoU of 98.4%, surpassing the performance of state-of-the-art architectures such as Swin-UNet and SegResNet. It also demonstrated superior accuracy in detecting enhancing-tumor voxels and tumor-core regions, highlighting its robustness, precision, and potential for clinical adoption in neuroimaging applications.
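The multi-modal contrastive learning strategy mentioned above can be illustrated with a generic InfoNCE-style loss that aligns embeddings of the same case across two MRI modalities. This is a hedged sketch, not the paper's exact contrastive-fusion formulation, and the graph-attention module is not shown; the projection size, temperature, and modality pairing are assumptions.

# Sketch of a multi-modal contrastive (InfoNCE-style) alignment loss between
# two MRI modality embeddings, e.g. FLAIR and T1CE features of the same cases.
import torch
import torch.nn.functional as F

def multimodal_contrastive_loss(z_a: torch.Tensor,
                                z_b: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """Pull row i of z_a toward row i of z_b (same case), push apart the rest."""
    z_a = F.normalize(z_a, dim=1)           # (B, D)
    z_b = F.normalize(z_b, dim=1)           # (B, D)
    logits = z_a @ z_b.t() / temperature    # (B, B) cosine-similarity matrix
    targets = torch.arange(z_a.size(0), device=z_a.device)
    # Symmetric loss: modality A -> B and B -> A.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Usage: z_flair and z_t1ce would be pooled encoder features for one batch of cases.
z_flair = torch.randn(8, 128)
z_t1ce = torch.randn(8, 128)
loss = multimodal_contrastive_loss(z_flair, z_t1ce)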
Adaptive Threshold-Enhanced Deep Segmentation of Acute Intracranial Hemorrhage and its Subtypes in Brain CT Images Suganthi, R.; Yalagi, Pratibha C. Kaladeep; Chowdhury, Rini; Kumar, Prashant; Sharmila, D.; Krishna, Kunchanapalli Rama
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 4 (2025): October
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i4.1048

Abstract

Accurate segmentation of acute intracranial haemorrhage (ICH) in brain computed tomography (CT) scans is crucial for timely diagnosis and effective treatment planning. While the RSNA Intracranial Hemorrhage Detection dataset provides a substantial amount of labeled CT data, most prior research has focused on slice-level classification rather than precise pixel-level segmentation. To address this limitation, a novel segmentation pipeline is proposed that combines a 2.5D U-Net architecture with a dynamic adaptive thresholding technique for enhanced delineation of hemorrhagic lesions and their subtypes. The 2.5D U-Net model leverages spatial continuity across adjacent slices to generate initial lesion probability maps, which are subsequently refined using an adaptive thresholding method that adjusts based on local pixel intensity histograms and edge gradients. Unlike fixed global thresholding approaches such as Otsu's method, the proposed technique dynamically varies thresholds, enabling more accurate differentiation between hemorrhagic tissue and surrounding brain structures, especially in challenging cases with diffuse or overlapping boundaries. The model was evaluated on carefully selected subsets of the RSNA dataset, achieving a mean Dice similarity coefficient of 0.82 across all ICH subtypes. Compared to standard U-Net and DeepLabV3+ architectures, the hybrid approach demonstrated superior accuracy and boundary precision with fewer false positives. Visual analysis confirmed more precise lesion delineation and better correspondence with manual annotations, particularly in low-contrast or complex anatomical regions. This integrated approach proves effective for robust segmentation in clinical environments. It holds promise for deployment in computer-aided diagnosis systems, providing radiologists and neurosurgeons with a reliable tool for comprehensive ICH assessment and enhanced decision-making during emergency care.
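A hedged sketch of the locally varying threshold idea is given below: it refines a lesion probability map using a local mean (standing in for a local-histogram statistic) together with a Sobel edge-gradient term. The window size, gradient weighting, and clipping range are assumptions for illustration; the paper's exact histogram- and gradient-based rule is not reproduced.

# Sketch of refining a 2.5D U-Net probability map with a per-pixel threshold.
# The local-mean base threshold and gradient modulation are assumptions.
import numpy as np
from scipy import ndimage

def adaptive_threshold_refine(prob_map: np.ndarray,
                              window: int = 31,
                              base: float = 0.5,
                              grad_weight: float = 0.2) -> np.ndarray:
    """Binarise a 2D probability map with a threshold that varies per pixel."""
    # Local mean of the probabilities stands in for a local-histogram statistic.
    local_mean = ndimage.uniform_filter(prob_map, size=window)
    # Edge-gradient magnitude of the probability map (Sobel in both axes).
    gx = ndimage.sobel(prob_map, axis=0)
    gy = ndimage.sobel(prob_map, axis=1)
    grad = np.hypot(gx, gy)
    grad = grad / (grad.max() + 1e-8)
    # Lower the threshold near strong edges so boundary pixels are retained.
    threshold = base * (0.5 + 0.5 * local_mean) - grad_weight * grad
    return (prob_map >= np.clip(threshold, 0.05, 0.95)).astype(np.uint8)

# Usage: prob_map would be the 2.5D U-Net output for one CT slice.
mask = adaptive_threshold_refine(np.random.rand(512, 512))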